CN113017832A - Puncture surgery simulation method based on virtual reality technology - Google Patents

Puncture surgery simulation method based on virtual reality technology

Info

Publication number
CN113017832A
Authority
CN
China
Prior art keywords
human body
skin
model
image
guide plate
Prior art date
Legal status
Pending
Application number
CN202110251289.4A
Other languages
Chinese (zh)
Inventor
覃文军
杨广强
郭辉
王喆
王若雨
刘丽影
田松
Current Assignee
Northeastern University China
Affiliated Zhongshan Hospital of Dalian University
Original Assignee
Northeastern University China
Affiliated Zhongshan Hospital of Dalian University
Priority date
Filing date
Publication date
Application filed by Northeastern University China, Affiliated Zhongshan Hospital of Dalian University filed Critical Northeastern University China
Priority to CN202110251289.4A
Publication of CN113017832A
Legal status: Pending (current)


Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 — Computer-aided surgery; manipulators or robots specially adapted for use in surgery
    • A61B 34/10 — Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/25 — User interfaces for surgical systems
    • A61B 2034/101 — Computer-aided simulation of surgical operations
    • A61B 2034/105 — Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107 — Visualisation of planned trajectories or target regions
    • A61B 2034/108 — Computer-aided selection or customisation of medical implants or cutting guides

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Robotics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to a puncture surgery simulation method based on virtual reality technology, which comprises the following steps: S1, constructing a three-dimensional human body model in a virtual scene based on a specified DICOM image, wherein the human body model has the same proportions as the real human body; S2, receiving and executing an operator's operation instructions on the human body model in the virtual simulation system, by means of the auxiliary equipment of the virtual simulation system, so as to simulate a particle puncture operation based on the human body model; and S3, acquiring the dynamic position information of the puncture needle in the particle puncture operation for evaluation, according to the trajectory of the executed operation instructions. The method fully considers usability and reliability and effectively realizes virtual simulation of the particle puncture operation.

Description

Puncture surgery simulation method based on virtual reality technology
Technical Field
The invention relates to a virtual reality technology, in particular to a puncture surgery simulation method based on the virtual reality technology.
Background
With the continuing progress and integration of medical technology and information technology, computer-based virtual reality technology has penetrated many fields of medicine, such as virtual hospitals, virtual laboratories, virtual human bodies and visual medical treatment. Virtual reality is realized by a computer generating an interactive virtual environment that integrates visual, auditory, tactile and other senses. It is an interdisciplinary field developed on the basis of computer graphics, computer simulation, multimedia technology, image processing and pattern recognition, artificial intelligence and related techniques, and virtual surgery has become an important research direction in the medical field. Medical assistance is an important application, and how to preview and plan a puncture operation in a virtual scene based on virtual reality technology has become a technical problem that needs to be solved.
Disclosure of Invention
(I) Technical problem to be solved
In view of the above disadvantages and shortcomings of the prior art, the present invention provides a puncture surgery simulation method based on virtual reality technology, which solves the prior-art problem that a puncture operation cannot be simulated outside the human body, and makes the operation process comparatively safe.
(II) Technical solution
To achieve the above purpose, the main technical solution adopted by the invention is as follows:
in a first aspect, an embodiment of the present invention provides a puncture surgery simulation method based on a virtual reality technology, including:
s1, constructing a three-dimensional human body model in the virtual scene based on the specified DICOM image, wherein the human body model is a model with the same proportion as a real human body;
s2, receiving and executing an operation instruction of an operator on the human body model in the virtual simulation system by means of auxiliary equipment of the virtual simulation system, so as to realize simulation of the particle puncture operation based on the human body model;
and S3, acquiring the dynamic position information of the puncture needle in the particle puncture operation for evaluation, according to the trajectory of the executed operation instructions.
Optionally, the method further comprises:
s4, after receiving the instruction of generating the surgical guide plate, generating the surgical guide plate carrying partial track of the puncture needle according to the dynamic position information of the puncture needle and the associated information of the human body model to which each position information belongs;
printing the surgical guide plate based on a 3D printing technology;
or,
after receiving an instruction to generate the surgical guide plate, generating the surgical guide plate carrying partial tracks of the puncture needle according to the dynamic position information of the puncture needle and the associated information of the human body model to which each piece of position information belongs, and automatically generating a dosage planning report;
printing the surgical guide plate based on 3D printing technology.
Optionally, S1 includes:
s11, constructing a visualized part of the human body structure in a volume rendering mode based on the size and the interlayer spacing of the appointed DICOM image, wherein the visualized part comprises a cross-sectional view used for cutting and displaying in a multi-angle mode, the DICOM image comprises a CT image in a DICOM format, and the visualized part of the human body structure is a model with the same proportion as the human body in the CT image;
s12, acquiring a multi-angle human skin image, preprocessing a depth image and a color image of the human skin image, completing three-dimensional reconstruction of skin in a visualized part of a human structure through image enhancement, point cloud calculation and registration, data fusion and surface generation processes, and acquiring a skin model of an omnibearing surface skin to cover the visualized part as a three-dimensional human model;
or,
s11a, constructing a visualized part of the human body structure in a volume rendering mode based on the size and the interlayer spacing of the specified DICOM image, wherein the visualized part comprises a cross-sectional view used for cutting and displaying in a multi-angle mode, the DICOM image comprises a CT image in a DICOM format, the visualized part of the human body structure is a model with the same proportion as the human body in the CT image, the visualized part comprises a pathological region of human body tissues, and the visualized part is used as a human body model.
Optionally, S11 or S11a includes:
based on the specified DICOM image, extracting the data of the DICOM image, preprocessing the extracted data, coloring each preprocessed data point with coloring logic, performing three-dimensional visualization on the colored data according to a ray casting algorithm, and selecting the visualized part of the human body structure in the three-dimensional visualization by adjusting algorithm parameters corresponding to the human body.
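The "coloring logic" above (assigning a color value and an opacity value to each preprocessed data point) can be sketched as a piecewise-linear transfer function over voxel intensity. The control points below are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch of the coloring step: map each voxel intensity
# (e.g. a Hounsfield-unit value from the CT data) to an RGBA tuple via
# a piecewise-linear transfer function.  Control points are illustrative.

def make_transfer_function(points):
    """points: sorted list of (intensity, (r, g, b, a)) control points."""
    def tf(v):
        if v <= points[0][0]:
            return points[0][1]
        if v >= points[-1][0]:
            return points[-1][1]
        for (x0, c0), (x1, c1) in zip(points, points[1:]):
            if x0 <= v <= x1:
                t = (v - x0) / (x1 - x0)
                # linear blend between neighbouring control colors
                return tuple(a + t * (b - a) for a, b in zip(c0, c1))
    return tf

# Example: air transparent, soft tissue reddish, bone white and opaque.
tf = make_transfer_function([
    (-1000, (0.0, 0.0, 0.0, 0.0)),   # air
    (0,     (0.8, 0.4, 0.3, 0.2)),   # soft tissue
    (1000,  (1.0, 1.0, 1.0, 0.9)),   # bone
])
```

In a real pipeline the control points would be the "preset coloring protocol" mentioned in the embodiments, tuned so that the tumor site stands out.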
Optionally, S2 includes:
s21, importing the human body model into a virtual simulation system of a Unity3D platform in a binary file mode by adopting a TriLib package mode, and analyzing the human body model so as to construct the human body model in the virtual simulation system;
s22, performing operation on the human body model in the virtual simulation system by means of VR auxiliary equipment, so that the operation equipment model in the virtual simulation system is interacted with the human body model, and the particle puncture operation based on the human body model simulation is realized.
Optionally, the S22 includes:
adjusting the depth and/or path length of the transmitted light by means of the handle, to modify the human body model in the virtual simulation system or to display different parts of the human body model;
or,
modifying the path of the transmitted light by means of the handle, to display a cross-section of the human body model;
or,
inputting parameters for adjusting the human body model, acquiring information on the corresponding lesion tissue in the human body model and visually locating it;
or,
operating the puncture needle in the surgical equipment model by means of the handle to insert it into the human body model, and adjusting the displayed depth of the human body model to simulate the insertion position information of the puncture needle with and without the skin of the human body model.
Optionally, S21 further includes:
if the human body model constructed in the virtual simulation system has no skin information, receiving a user-triggered instruction to add skin information, acquiring skin-surface vertex data by adjusting the volume rendering threshold according to pre-acquired multi-angle human skin images, marking the skin area to be added in the human body model in the Unity engine to determine the skin point set of the region to be operated on, and completing the reconstruction based on a Poisson reconstruction algorithm.
Optionally, S4 includes:
if the puncture operation in the virtual simulation system has been simulated, receiving a user-triggered instruction to generate the surgical guide plate, acquiring the parameters of the puncture needle, the position of the puncture needle and the corresponding skin range, and generating the surgical guide plate for printing using a Poisson reconstruction algorithm, a contour extraction algorithm and a mesh stitching algorithm;
the surgical guide plate corresponds to the position of the human body where the operation is needed, and the angle, length and depth of the puncture needle and the needle-entry hole information are displayed on the surgical guide plate.
Optionally, S4 specifically includes:
acquiring the skin range within a preset distance of the puncture needle according to the acquired dynamic position information of the puncture needle, and screening the boundary contour for generating the surgical guide plate according to the dynamic position information and the storage rule of the mesh to which the skin range belongs;
stitching the screened boundary contour using a triangular splicing method, and adding the angle, length and depth of the puncture needle and the needle-entry hole information;
removing duplicate mesh elements, and generating the surgical guide plate for printing.
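The boundary-contour screening above can be sketched as follows, assuming the skin range is stored as an indexed triangle mesh: an edge shared by two triangles is interior, while an edge used by exactly one triangle lies on the boundary contour. The function name and mesh representation are illustrative, not taken from the patent:

```python
from collections import Counter

def boundary_edges(triangles):
    """Return the edges that belong to exactly one triangle.

    triangles: list of (i, j, k) vertex-index triples.  An edge shared
    by two triangles is interior; an edge used once lies on the
    boundary contour of the mesh patch (e.g. the skin region the
    puncture needle passes through).
    """
    counts = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1   # orientation-independent key
    return [e for e, n in counts.items() if n == 1]

# Two triangles sharing edge (1, 2): the shared edge is interior,
# the four remaining edges form the boundary contour.
tris = [(0, 1, 2), (1, 3, 2)]
print(sorted(boundary_edges(tris)))   # [(0, 1), (0, 2), (1, 3), (2, 3)]
```

Chaining these edges end to end yields the closed contour that the triangular splicing step then stitches.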
In a second aspect, an embodiment of the present invention further provides a computing device, including: a memory and a processor, wherein the memory stores a computer program, and the processor executes the computer program stored in the memory, in particular, executes the virtual reality technology-based puncture surgery simulation method according to any one of the first aspect.
(III) Advantageous effects
The beneficial effects of the invention are as follows: the method combines VR with particle puncture surgery to preview and plan the operation in a virtual scene, and simulates the vision and touch of a real surgical procedure. The main functional modules include visual modeling, haptics, guide plate generation and the like. An operator (such as a doctor) can simulate the operation in the virtual scene, and a three-dimensional guide plate is generated after the surgical simulation is finished, planning the position, angle and depth of the puncture needle for the doctor in advance, thereby greatly increasing puncture accuracy and reducing the corresponding surgical risk.
Drawings
Fig. 1 is a schematic flow chart of a virtual reality technology-based puncture surgery simulation method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a virtual reality technology-based puncture surgery simulation method according to another embodiment of the present invention;
FIG. 3 is a flow chart illustrating a volume rendering method used in an embodiment of the present invention;
FIG. 4 is a flow chart illustrating the use of a ray casting algorithm in an embodiment of the present invention;
FIG. 5 is a schematic view of a human body structure visualization portion in an embodiment of the invention;
FIG. 6 is a schematic diagram of a point cloud three-dimensional reconstruction effect according to an embodiment of the present disclosure;
FIG. 7 is a schematic view of a virtual simulation system according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a human-computer interaction presentation of a virtual simulation system in an embodiment of the invention;
FIG. 9 is a schematic diagram illustrating a human body display effect displayed on an operation interface in the virtual simulation system according to an embodiment of the present invention;
FIG. 10 is a schematic illustration of the effect of skin penetration in an embodiment of the present invention;
FIG. 11 is a schematic view of the effect of the surgical guide in an embodiment of the present invention;
FIG. 12 is a schematic diagram of a human body model skin implemented by Poisson reconstruction in an embodiment of the present invention;
FIG. 13 is a schematic diagram illustrating the effects of Poisson reconstruction in an embodiment of the present invention;
FIG. 14 is a schematic diagram of the extraction principle of the existing Mesh contour extraction algorithm;
FIG. 15 is a diagram illustrating the effect of contour extraction when the Mesh contour extraction algorithm is used in the embodiment of the present invention;
FIG. 16 is a schematic view of vertex translation using a mesh stitching algorithm in accordance with the present invention;
FIG. 17 is a schematic diagram of the position of the correction repeat point in the present invention;
FIG. 18 is a schematic view of the connection of triangular faces in the present invention;
FIG. 19 is a diagram illustrating the stitching results after the mesh stitching algorithm is applied.
Detailed Description
For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.
Example one
As shown in fig. 1, fig. 1 is a schematic flow chart of a virtual reality technology-based puncture surgery simulation method according to an embodiment of the present invention. The method of this embodiment can be implemented on any electronic device; the electronic device is not limited in this embodiment. The method of this embodiment may include the following steps:
s1, constructing a three-dimensional stereoscopic human body model in the virtual scene based on the specified DICOM image, wherein the human body model is the same as the real human body in proportion.
For example, the step S1 may include the following steps:
s11, constructing a visualized part of the human body structure by adopting a volume rendering mode based on the specified DICOM image; that is, based on a designated DICOM image (such as a CT image in DICOM format), data of the DICOM image is extracted, the extracted data is preprocessed, coloring (color value and opacity value) is performed on each preprocessed data by using coloring logic, the colored data is subjected to three-dimensional visualization processing according to a ray casting algorithm, and a human body structure visualization part in the three-dimensional visualization processing is selected by means of a human body parameter corresponding to an adjusting algorithm.
S12, acquiring multi-angle human skin images, preprocessing the depth images and color images of the human skin images, completing three-dimensional reconstruction of the skin on the visualized part of the human structure through image enhancement, point cloud calculation and registration, data fusion and surface generation, and obtaining a human body model with complete surface skin as the three-dimensional human body model;
the multi-angle skin image of the human body in this embodiment may be the multi-angle skin image of the region corresponding to the CT image in step S11, and is not any multi-angle skin image of the human body.
Usually, the multi-angle human skin images are acquired in advance by photography and stored in a storage device of the virtual simulation system for subsequent use.
In the skin adding operation of the UI interface in the virtual simulation system described below, the skin image with the angle information in the storage device may be called to realize the skin coverage of the human body model.
In this embodiment, the depth images and color images are preprocessed mainly to remove spots in the skin and to perform per-image correspondence, pixel matching and similar operations.
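A minimal sketch of the spot-removal part of this preprocessing, under the assumption of a plain 3x3 median filter (the patent does not name a specific filter):

```python
# Despeckling sketch for the depth/color preprocessing step: a 3x3
# median filter removes isolated "spots" (salt-and-pepper noise) while
# preserving edges.  Pure-Python stand-in; a real pipeline would also
# register the depth image to the color image pixel by pixel.

def median_filter_3x3(img):
    """img: 2-D list of numbers; returns a filtered copy (borders kept)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]          # median of the 9 values
    return out

noisy = [[10, 10, 10],
         [10, 99, 10],    # a single bright speck
         [10, 10, 10]]
print(median_filter_3x3(noisy)[1][1])      # prints 10: speck removed
```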
In another embodiment, step S1 may specifically be: constructing a visualized part of the human body structure by volume rendering based on the specified DICOM image, wherein the visualized part includes a pathological region of the human tissue and is used as the human body model. In this embodiment, the human body model is constructed outside the virtual simulation system and imported into the virtual simulation system after construction.
And S2, receiving and executing an operation instruction of an operator on the human body model in the virtual simulation system by means of auxiliary equipment of the virtual simulation system, so as to realize simulation of the particle puncture operation based on the human body model.
For example, the human body model is imported into a virtual simulation system of the Unity3D platform in a binary file manner by using a TriLib package, and the human body model is analyzed, so that the human body model is constructed in the virtual simulation system;
by means of VR auxiliary equipment (such as a handle and the like), the operation of the human body model is carried out in the virtual simulation system, so that the operation equipment model and the human body model in the virtual simulation system are interacted, and the particle puncture operation based on the human body model simulation is realized.
For example, adjusting the depth of the transmitted light by means of a handle to modify the human body model in the virtual simulation system or to display different parts of it; or modifying the path of the transmitted light by means of the handle to display a cross-section of the human body model; or inputting parameters for adjusting the human body model, acquiring information on the corresponding lesion tissue in the human body model and visually locating it; or operating the puncture needle in the surgical equipment model by means of the handle to insert it into the human body model, and adjusting the displayed depth of the human body model to simulate the insertion position information of the puncture needle with and without the skin of the human body model.
That is, in the interactive interface of the virtual simulation system, the operator can interact using the VR handle, including adjusting the depth and path length of the projected light, and can cut, and undo cuts on, the human body model displayed in the interactive interface.
And S3, acquiring the dynamic position information of the puncture needle in the particle puncture operation for evaluation, according to the trajectory of the executed operation instructions.
In a specific implementation process, the method of this embodiment may further include step S4 described below, as shown in fig. 2.
S4, after receiving the instruction to generate the surgical guide plate, generating the surgical guide plate carrying a partial track of the puncture needle according to the dynamic position information of the puncture needle; printing the surgical guide plate based on 3D printing technology.
Alternatively, in another embodiment, the step S4 may be as follows:
after receiving an instruction to generate the surgical guide plate, generating the surgical guide plate carrying a partial track of the puncture needle according to the dynamic position information of the puncture needle and the corresponding area of the human body model, and automatically generating a dosage planning report; printing the surgical guide plate based on 3D printing technology.
It can be understood that the operator can click the button for generating the skin or for generating the guide plate on the UI of the virtual simulation system with the handle, and the skin of the human body model or the three-dimensional surgical guide plate is generated with one click.
In this embodiment, if the simulation of the puncture surgery in the virtual simulation system is completed, a user-triggered instruction to generate the surgical guide is received, the human body contour to be printed is extracted and rendered in the Unity engine (i.e., the engine of the virtual simulation system), and at least two mesh contours are stitched using a mesh stitching algorithm to form a closed three-dimensional model for printing as the surgical guide.
Specifically, the generated three-dimensional surgical guide of the human body model can be exported from the virtual simulation system, for example as a binary file, and then 3D printed.
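The export step can be sketched as a minimal binary-STL writer. Binary STL is a common 3D-printing interchange format and is an assumption here; the patent only says the guide is exported as a binary file:

```python
import struct

def write_binary_stl(path, triangles):
    """Write triangles to a binary STL file for 3D printing.

    triangles: list of (v0, v1, v2) triples, each vertex an (x, y, z)
    tuple.  Normals are written as zeros, which most slicers accept and
    recompute.  Illustrative sketch of the export step, not the
    patent's actual file format.
    """
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                         # 80-byte header
        f.write(struct.pack("<I", len(triangles)))  # triangle count
        for v0, v1, v2 in triangles:
            f.write(struct.pack("<3f", 0.0, 0.0, 0.0))   # normal (zeroed)
            for v in (v0, v1, v2):
                f.write(struct.pack("<3f", *v))
            f.write(struct.pack("<H", 0))           # attribute byte count
```

Each triangle record is 50 bytes, so a file with N triangles is 84 + 50*N bytes, which gives a quick sanity check on the export.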
According to the method, a particle puncture operation can be previewed and planned in a virtual scene, and the vision and touch of a real surgical procedure can be simulated. The main functional modules include visual modeling, haptics, guide plate generation and the like. An operator (such as a doctor) can simulate the operation in the virtual scene; a three-dimensional guide plate is generated after the surgical simulation is finished, and the position, angle and depth of the puncture needle in the operation can be planned for the doctor in advance, thereby greatly increasing puncture accuracy and reducing the corresponding surgical risk.
Example two
The VR-3D-printing-based multi-needle puncture-assisted surgery system (i.e., the virtual simulation system) first needs to acquire a number of CT images of the patient's body in DICOM format. The images are stacked layer by layer to convert the two-dimensional images into a three-dimensional model, so that the patient's three-dimensional body structure can be displayed in a virtual scene from the CT images by volume rendering, and different specific body parts of the patient are displayed by adjusting the depth of the light and the length of the projection path, in order to observe the tumor site and plan the surgical scheme. To display a realistic human body model, a Poisson reconstruction method is used to reconstruct a real skin model in three dimensions from color and depth images of the patient taken from multiple angles, and the skin model is fitted closely over the previous body model. After the full-scale, realistic body model is displayed in the virtual scene (the body model reconstructed outside the virtual simulation system is imported into it), the doctor can simulate and train the particle puncture operation and plan and navigate the operation in the virtual scene. During a real-time surgical procedure, the physician can also adjust and view the body model in real time, and modify and adjust the surgical plan in real time.
An operator such as a doctor can observe and study the patient's body model in the virtual scene. By modifying the body model, the doctor can carefully examine the patient's tumor site, plan the insertion position, angle and depth of the puncture needle, formulate a surgical scheme that minimizes injury to the simulated patient, simulate the particle puncture operation and carry out corresponding surgical training, improving proficiency and enhancing the accuracy and safety of the actual operation.
After the doctor formulates the surgical scheme and simulates the particle operation, the system obtains the parameters of the puncture needle, the information of the patient's skin model and the skin range requiring the operation, and generates a three-dimensional surgical guide plate using a Poisson reconstruction algorithm, a contour extraction algorithm and a mesh stitching algorithm. The guide plate fits closely to the body part requiring the operation; the needle-entry hole is reconstructed on the guide plate according to the angle, length and thickness of the puncture needle, and the position, depth and angle of the puncture needle during the doctor's operation are recorded. After the surgical guide plate is generated, it can be exported as a three-dimensional model and printed by 3D printing. During the real particle puncture operation, the guide plate is used for intraoperative navigation: it closely covers the body part of the patient requiring the operation, the corresponding puncture needle is selected according to the thickness of the needle-entry hole of the guide plate, and the puncture needle is inserted according to the position and angle of the needle-entry hole, the scale marked on the guide plate, the surgical scheme made by the doctor, and the needle depth recorded by the system.
The algorithm for each stage is described independently below.
1. Volume rendering
The method realizes three-dimensional visualization of the human body structure through volume rendering, and modifies and optimizes the volume rendering algorithm as required on the basis of the basic display. To approach a realistic display, a three-dimensional model that shows the human body structure at full scale according to the size of the CT images and the inter-slice spacing is added, achieving an effect close to the real human body in the VR scene. Display of specific parts is added: particular human structures, such as the structure at the tumor site, are shown, and structures the doctor does not need are hidden. Cross-sectional views in any direction, showing the anatomical model by cutting, are also added. By displaying the structure at the tumor site, adjusting the light depth and path, and viewing cross-sections in all directions, the doctor can study the tumor site much more directly and carefully and formulate a more suitable research scheme.
Volume rendering is a technique for generating a two-dimensional image on the screen directly from a three-dimensional data field. Its advantage is that the whole of the three-dimensional data field can be observed from the generated images, and it is also easy to process in parallel. The ray casting algorithm is a direct volume rendering algorithm based on an image sequence: a ray is emitted from each pixel of the image along a fixed direction (usually the viewing direction) and passes through the whole image sequence; in the process, the image sequence is sampled to obtain color information, and the color values are accumulated according to a light absorption model until the ray has passed through the whole sequence; the accumulated color value is the color of the rendered pixel. Fig. 3 shows a schematic diagram of the ray casting algorithm.
The ray casting algorithm starts from each pixel f(x, y) of the image space and casts a ray along the viewing direction. The ray passes through the three-dimensional data field with a fixed step length, and K equidistant sampling points are selected along the ray; the color value and opacity of each sampling point are obtained by trilinear interpolation of the color values and opacities of the 8 data points closest to it. The color and opacity values of the sampling points on each ray are composited from front to back or from back to front to obtain the color of the pixel emitting the ray, so that the final image is obtained on the screen. The basic principle of the ray casting algorithm is shown in fig. 4.
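The interpolation step can be sketched as follows: a minimal trilinear interpolation over the 8 surrounding data points. The volume layout and names are illustrative assumptions:

```python
import math

# Sketch of the trilinear-interpolation step: the value at a sample
# point inside a voxel grid is blended from the 8 surrounding data
# points.  `vol[z][y][x]` is assumed to hold one scalar per voxel
# (a real renderer interpolates color and opacity the same way).

def trilinear(vol, x, y, z):
    x0, y0, z0 = int(math.floor(x)), int(math.floor(y)), int(math.floor(z))
    tx, ty, tz = x - x0, y - y0, z - z0
    def v(i, j, k):
        return vol[z0 + k][y0 + j][x0 + i]
    # interpolate along x, then y, then z
    c00 = v(0, 0, 0) * (1 - tx) + v(1, 0, 0) * tx
    c10 = v(0, 1, 0) * (1 - tx) + v(1, 1, 0) * tx
    c01 = v(0, 0, 1) * (1 - tx) + v(1, 0, 1) * tx
    c11 = v(0, 1, 1) * (1 - tx) + v(1, 1, 1) * tx
    c0 = c00 * (1 - ty) + c10 * ty
    c1 = c01 * (1 - ty) + c11 * ty
    return c0 * (1 - tz) + c1 * tz

# A 2x2x2 volume ramping from 0 to 1 along x: the centre samples to 0.5.
vol = [[[0.0, 1.0], [0.0, 1.0]], [[0.0, 1.0], [0.0, 1.0]]]
print(trilinear(vol, 0.5, 0.5, 0.5))   # 0.5
```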
Ray casting compositing formula:

C_0 = A_s C_s + (1 - A_s) C_d

where A_s is the opacity of the transparent object, C_s the original color of the transparent object, C_d the original color of the target object behind it, and C_0 the color value obtained by observing the target object through the transparent object.
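Applied sample by sample, the compositing formula above becomes a short loop per ray; a minimal back-to-front sketch (NumPy assumed; illustrative only, not the patent's actual implementation):

```python
import numpy as np

def composite_ray(colors, alphas):
    """Back-to-front 'over' compositing along one ray.

    colors: list of RGB samples ordered front to back; alphas: their opacities A_s.
    Repeatedly applies C_0 = A_s*C_s + (1 - A_s)*C_d, where C_d is the color
    accumulated so far behind the current sample.
    """
    c = np.zeros(3)
    for c_s, a_s in zip(reversed(colors), reversed(alphas)):
        c = a_s * np.asarray(c_s, dtype=float) + (1.0 - a_s) * c
    return c
```

A fully opaque front sample hides everything behind it, which is a quick sanity check of the operator.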
Based on the above principle, this embodiment uses volume rendering to achieve three-dimensional visualization of the human body structure; this step can be performed outside the virtual simulation system. Specifically, a set of lung CT images in DICOM format is read and the image data extracted; the data are preprocessed and enhanced; color and opacity values are assigned to the data points according to a preset shading protocol; the model is visualized in three dimensions with the ray casting algorithm and imported into the virtual simulation system, where the visualization can be adjusted through the UI interaction interface: the display position, depth and cross section of the model are all adjustable. A schematic diagram of the visualization effect is shown in fig. 5.
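The "preset shading protocol" step, assigning a color and opacity to each data point, is typically a transfer function over CT values. An illustrative piecewise-linear opacity curve (the control points below are assumptions for illustration, not the patent's actual protocol):

```python
import numpy as np

# Illustrative control points only (roughly: air, lung, soft tissue, bone).
HU_POINTS = [-1000.0, -500.0, 40.0, 400.0, 3000.0]
OPACITY = [0.0, 0.0, 0.05, 0.8, 1.0]

def opacity_tf(hu):
    """Piecewise-linear transfer function mapping a CT value (HU) to opacity."""
    return float(np.interp(hu, HU_POINTS, OPACITY))
```

A color transfer function is built the same way, with one curve per channel.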
2. Point cloud three-dimensional reconstruction
After the three-dimensional model of the human body structure is imported into the scene of the virtual simulation system, its size matches real-world proportions. To approximate a real human body further, a realistic human skin model is reconstructed by point cloud three-dimensional reconstruction. So that the skin model fits the structural model closely, feature points of different parts of the human body structure and the corresponding feature points of the skin model are obtained, and the skin is then draped over the generated structural model. This facilitates reconstruction of the three-dimensional surgical guide plate and improves the realism of the human body structure model.
Point cloud three-dimensional reconstruction first requires point cloud data: a depth image and a color image of the object are captured with a camera capable of recording depth information. After the images are acquired, three-dimensional reconstruction of the object is completed through the steps of image enhancement, point cloud computation and registration, data fusion, and surface generation.
When acquiring images, enough views are needed to capture all the information about the subject, so the same subject should be photographed from multiple angles. Because of limited device resolution, the acquired image information always has some defects, such as missing depth caused by specular reflection from smooth surfaces, semi-transparent or transparent objects, dark objects, or out-of-range regions; the images therefore need to be enhanced. After enhancement, point cloud data can be computed from the images.
Point cloud registration is required after the point cloud data are computed. Because multiple frames are captured from different angles to cover the whole scene, each frame shares some common content with its neighbors. To use these frames for three-dimensional reconstruction, they must be analyzed and the transformation parameters between frames solved. Image registration takes the common part of the scene as its basis, aligns the multi-frame images acquired at different times, angles and illumination intensities into a unified coordinate system, computes the corresponding translation vectors and rotation matrices, and eliminates redundant information. This embodiment combines coarse registration with precise registration.
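The precise-registration stage usually iterates a closed-form rigid fit between matched point pairs (the inner step of ICP-style methods). A sketch of that inner step under known correspondences, with NumPy assumed (illustrative, not the patent's specific registration method):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t so that dst ≈ R @ src + t.

    Kabsch/SVD solution for matched point sets src and dst (shape (n, 3) each).
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)       # centroids
    H = (src - cs).T @ (dst - cd)                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

In a full ICP loop, correspondences are re-estimated by nearest neighbor and this fit repeated until convergence.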
The registered depth information is point cloud data scattered and unordered in space, showing only partial information about the scene; data fusion is therefore needed to merge the frames into a unified representation.
The last step is surface generation, whose purpose is to construct a visible iso-surface of the object; a Poisson reconstruction algorithm is adopted to generate and triangulate the surface.
Therefore, in this embodiment the camera photographs the human body from various angles, the images undergo point cloud generation and fusion, and a surface skin model that fits the human body structure closely is reconstructed, which facilitates reconstruction and printing of the surgical guide plate and improves the realism of the human body model, as shown in fig. 6.
3. Virtual simulation system
A real operating-room scene is displayed in the UI interaction interface of the virtual simulation system. Wearing VR equipment in this virtual operating room, the doctor uses the VR handle to perform virtual surgery and a range of other operations in the scene, such as importing and exporting various surgical equipment models, adjusting the display effect of the model, cutting the model, generating skin, generating a guide plate, and using the surgery simulation and training functions.
The virtual surgical planning and navigation system (i.e., the virtual simulation system) is a system for surgical simulation, training, planning and navigation that performs seed (particle) puncture in a virtual operating room. Its functions comprise three-dimensional visualization of the human body structure, virtual surgical scene construction, virtual-reality human-computer interaction design, and point cloud three-dimensional reconstruction. Through these functions the doctor can immersively observe, rehearse and plan the operation, and set out a more reasonable and accurate surgical scheme, thereby reducing avoidable risks in surgery and greatly improving its success rate.
The virtual simulation system runs on a computer and mainly comprises modules for three-dimensional visualization of the human body structure, human-computer interaction in the virtual scene, three-dimensional model reconstruction, seed puncture surgery simulation, surgical planning, and navigation. After putting on the VR equipment, the user can interact in the virtual environment, view the patient's diseased tissue, simulate surgical procedures or training, perform intra-operative navigation, plan surgical schemes, and so on. Once this is complete, a surgical guide plate can be generated from the stored training data.
Software developed on the Unity3D platform normally packages all scene model files into a resource bundle before the software runs, but the virtual surgery system needs to import a patient's three-dimensional image model in real time while running. Unity3D does not support real-time model import directly, but it can load binary files at run time, so methods from the TriLib package are used to import the model into the system as a binary file and parse the model file format in order to construct the model inside the system. Export uses the same approach, with Export Package completing export of the model file.
Because seed puncture surgery is simulated in the system, human-computer interaction must be addressed. The virtual surgical planning and navigation system as currently built is a virtual simulation system that runs on a PC while VR equipment is worn: in the virtual scene the doctor can immersively observe the patient's anatomical information, examine and analyze specific tissue structures, and then operate with the handle, interacting with the surgical equipment models and the anatomy so as to simulate the operation. The system scenario is shown in fig. 7, and the human-computer interaction presentation is shown in fig. 8.
3.1, virtual surgery
In a room set up for VR, wearing the VR equipment, the doctor runs the virtual simulation system on the PC. In the virtual scene the doctor can immersively observe the patient's anatomical information, examine and analyze specific tissue structures, and then operate with the handle, interacting with the surgical equipment models and the anatomy to simulate the operation. Seed puncture surgery can be simulated in the virtual scene: simulation, training, surgical planning and navigation are all carried out in the virtual operating room. The virtual scene is shown in fig. 7.
Using the handle on the operation interface, the doctor modifies the human body model in the virtual scene by adjusting the depth of the ray casting, so as to display different parts of the model, and displays cross sections of the model by modifying the ray path. The experimental result is shown in fig. 9.
In this way the occlusion of the tumor by bones and other tissues, which greatly reduces the accuracy and safety of the operation, can be overcome. By adjusting the various parameters of the model, the doctor learns the details of the patient's diseased tissue, and a specific tumor site can be visualized to increase the accuracy and precision of the puncture during the operation. The doctor then analyzes the position of the patient's tumor and, with the system's assistance, formulates a suitable and accurate surgical scheme.
Following the formulated scheme, the doctor carries out the virtual seed puncture in the virtual scene: the handle is used to adjust the displayed depth of the human body model, the puncture needle is picked up with the handle and its tip inserted into the model, and multi-needle and repeated insertions are performed according to the scheme. The experimental result is shown in fig. 8.
After needle insertion is complete, the display depth of the model is adjusted with the handle so that the model shows the skin; the experimental result is shown in fig. 10.
4. Generating surgical guide for printing
When the doctor clicks the button for generating a three-dimensional surgical guide plate with the handle, the system generates a 3D surgical guide plate that exactly matches the human body model and the puncture needles, according to the insertion position, angle and depth of each needle in the model; the effect of the 3D surgical guide plate is shown in fig. 11. The doctor can repeatedly simulate the seed puncture in the virtual scene for training and to improve proficiency, and the real seed puncture is then performed with the assistance of the generated 3D guide plate. Corresponding needle entry holes are generated on the guide plate according to the scheme the doctor planned beforehand, carrying the angle and depth information; the doctor inserts the puncture needle into the cylindrical entry hole according to this information and operates with the guide plate's assistance, improving the accuracy and safety of the operation.
That is, after the doctor performs the simulated seed puncture, the system stores the relevant information from the operation: the surgical scheme and the position, angle and depth of each puncture needle. After the "generate guide plate" button in the UI interface of the virtual simulation system is clicked, the system generates the three-dimensional surgical guide plate model from the stored information using Mesh contour extraction, a Mesh stitching algorithm and Poisson reconstruction. So that the guide plate fits the body closely, it is reconstructed from points of the skin model: the points at the needle insertion positions and the surrounding points are gathered into a closed loop, the guide plate is generated from the points inside the loop, and feature points are marked among them to fix the guide plate's position on the body, ensuring its accuracy and safety. After the guide plate is generated, needle entry holes must be added to guide needle insertion: each hole is a tube located at the needle position, angled to match the angle between the needle and the skin, with an inner diameter equal to the needle diameter, generated strictly according to the needle size.
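The needle entry hole just described, a tube placed at the needle position and angled to match the needle, can be sketched as ring generation along the needle axis. Function and parameter names below are illustrative, not from the patent:

```python
import numpy as np

def needle_channel(entry, direction, radius, length, segments=16):
    """Generate the two vertex rings of a tube along the needle axis.

    entry: skin entry point; direction: needle axis; radius: half the needle
    diameter (inner radius of the tube); length: channel length.
    Returns (entry_ring, exit_ring), each of shape (segments, 3).
    """
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    # Pick any vector not parallel to d to build an orthonormal frame (u, v, d).
    a = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, a)
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    ang = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    ring = np.asarray(entry, float) + radius * (np.outer(np.cos(ang), u)
                                                + np.outer(np.sin(ang), v))
    return ring, ring + length * d
```

Connecting corresponding vertices of the two rings with triangles yields the tube wall, which is then merged into the guide plate mesh.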
After the three-dimensional surgical guide plate is generated in the virtual scene, it is exported and then printed with 3D printing technology. Building the 3D guide plate model in virtual reality exploits the realism and stereoscopic quality of VR, and reduces the difficulty and manufacturing time compared with the traditional way of modeling the guide plate for direct 3D printing. The printed guide plate is used for navigation and assistance in the real operation.
5. Poisson reconstruction algorithm
To generate skin information in the virtual simulation system, this embodiment obtains skin-surface vertex data by adjusting the volume rendering threshold and marks the skin area in the Unity engine, so that the point set of the skin area at the site to be operated on can be extracted and the skin information generated with the Poisson reconstruction algorithm.
The core idea of the Poisson reconstruction algorithm is that the point cloud represents positions on the object's surface and the normal vectors of the points represent the inside/outside direction; by implicitly fitting an indicator function derived from the object, a smooth estimate of the object's surface can be produced. The steps of Poisson reconstruction are shown in fig. 12.
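The relation between the point normals and the indicator function can be written as a standard Poisson problem (standard background on Poisson surface reconstruction, consistent with the gradient description that follows): the oriented samples define a vector field V, and the indicator function χ is its least-squares potential,

```latex
\nabla \chi \approx \vec{V}
\;\Longrightarrow\;
\chi = \arg\min_{\chi} \left\lVert \nabla \chi - \vec{V} \right\rVert^{2}
\;\Longrightarrow\;
\Delta \chi = \nabla \cdot \vec{V}
```

where the last step is the Euler-Lagrange equation of the minimization, i.e. the Poisson equation that gives the algorithm its name.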
The key to Poisson reconstruction is to estimate the indicator function of the model's surface and extract its iso-surface, thereby generating a watertight mesh model. Regarding the indicator function: away from the surface it hardly changes (it is essentially constant), so its gradient is almost zero everywhere except at points of the model surface, while at surface points its gradient agrees with the normal vector of the surface there. The normal vectors of the point cloud can therefore be regarded as a set of samples of the gradient of the indicator function. For iso-surface extraction, the Dual Contouring algorithm (DC algorithm for short) is adopted. The DC algorithm constructs the iso-surface from Hermite data (the positions and normals of the intersection points); the algorithm comprises two steps:
The first step: vertex generation. For each voxel cell intersected by the iso-surface, one vertex coordinate is generated by minimizing the quadratic error function

E(x) = Σ_i (n_i · (x − p_i))²

where p_i is the position of the i-th intersection point and n_i is the normal at that intersection. With the normals stacked as the rows of a matrix A and b_i = n_i · p_i, the error function is written in matrix form as:

E(x) = (Ax − b)^T (Ax − b) = x^T A^T A x − 2 x^T A^T b + b^T b
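Minimizing the quadratic error function amounts to one small least-squares solve per intersected voxel cell. A sketch with NumPy (real Dual Contouring implementations additionally regularize the solve toward the cell centroid when the normals are near-degenerate):

```python
import numpy as np

def qef_vertex(points, normals):
    """Minimize E(x) = sum_i (n_i . (x - p_i))^2 for one voxel cell.

    Row i of A is the intersection normal n_i and b_i = n_i . p_i, so the
    minimizer is the least-squares solution of A x ≈ b.
    """
    A = np.asarray(normals, float)
    P = np.asarray(points, float)
    b = np.einsum('ij,ij->i', A, P)          # b_i = n_i . p_i
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With three orthogonal tangent planes, the solution is simply their common intersection point.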
The second step: patch generation. For each voxel edge intersected by the iso-surface, the 4 neighboring voxel cells sharing that edge each contain a generated vertex; connecting these 4 vertices produces 1 quadrilateral patch. The Poisson reconstruction effect is shown in fig. 13.
6. Mesh contour extraction algorithm
In this embodiment a Mesh contour extraction algorithm is used when generating the three-dimensional surgical guide plate. Rendering in the Unity engine of the virtual simulation system is based on the Mesh data structure, which consists of three parts: a vertex coordinate array, a triangle relation array, and a normal vector array. Analysis shows that only the edges of the outer contour are unshared, while every other edge is shared by two triangles; the outer contour can therefore be obtained by screening out the edges that appear only once and ordering them, as shown in fig. 14.
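The screening of edges that appear only once can be sketched directly over the triangle relation array (plain Python, with vertex indices standing in for Unity's Mesh data):

```python
from collections import Counter

def boundary_edges(triangles):
    """Return the outer-contour edges of a triangle mesh.

    Edges shared by two triangles appear twice in the triangle list;
    boundary (outer-contour) edges appear exactly once.
    """
    count = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(e))] += 1   # undirected edge key
    return [e for e, n in count.items() if n == 1]
```

Ordering the returned edges into a loop then gives the closed outer contour described above.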
Before the three-dimensional surgical guide plate is generated, the insertion positions of the puncture needles must first be acquired. The virtual simulation system then automatically obtains the corresponding points on the skin and stores them in a file; from these points it derives, by contour extraction, a region that contains all the puncture needles, which serves as the reference for generating the surgical guide plate in one pass.
The contour extraction steps are as follows:
(1) Contour extraction: according to the storage rule for a Mesh in Unity, a Mesh consists of vertices, triangles and normals, and each shared edge in the triangular mesh is stored twice. The triangle relation array triangles is traversed, and the vertices on the boundary are screened out and recorded.
(2) Duplication and stitching of the mesh: the skin model has only one layer of points, so the mesh must be duplicated and filled to form a complete surgical guide plate, after which the two meshes are sutured by a triangular splicing method.
Rendering the extracted contour in Unity shows that it wraps the edge completely, as in the effect diagram of fig. 15.
(3) Adding needle channels: after the guide plate is generated, the tube that each puncture needle enters, i.e. the needle entry hole, is added to the plate. A corresponding entry hole is generated from the needle position and angle obtained from the system and the needle size; the meshes are then merged by Boolean operations and sutured, which can be completed directly with the BooleanRT plugin: the repeated mesh is removed and the non-intersecting mesh is added.
7. Mesh stitching algorithm
After extraction of the inner-surface mesh is completed, there is only one surface. To form a film with thickness, all vertices of the inner-surface mesh must be translated to form the outer-surface mesh. If all vertices were translated along the same direction, the film thickness would be non-uniform (clearly D_1 < D_2), and the edges would easily form a very small sharp angle (the angle θ in fig. 16), which is unfavorable for 3D printing. Fig. 16 shows a schematic diagram of the vertex translation.
The translation mode selected in this embodiment is to translate each vertex along its own normal vector, and the coordinate and normal vector calculation formula of the new vertex after translation is as follows:
V′_i = V_i + N_i · C
N′_i = N_i

where V_i and N_i are respectively the mesh vertex coordinates and normal vector, V′_i and N′_i are the coordinates and normal vector of the new mesh vertex, and C is the film thickness. A schematic diagram of the repeated-point position correction is shown in fig. 17:
Let V_1 and V_2 be two repeated points with corresponding normal vectors N_1 and N_2; then V_1 = V_2 but N_1 ≠ N_2. The translation direction of the repeated points is the unit normal vector N′ obtained by normalizing the sum of their normals, and the new vertices V′_1 and V′_2 after translation are calculated by the following formulas:

N′ = (N_1 + N_2) / |N_1 + N_2|
V′_1 = V′_2 = V_1 + N′ · C
Thus the inner- and outer-surface meshes of the film are obtained. Finally, the normal vectors of the inner-surface vertices must be inverted, and the edges of the two surfaces sewn together, so that the whole film becomes a closed model satisfying the requirements of 3D printing. The cutting points form the edges of the two mesh layers; because the outer surface is obtained by translating the inner-surface vertices, the points on the outer edge are the translated images of the cutting points on the inner edge, so the inner and outer edge points correspond one to one and are connected into triangular faces one by one.
On the basis of the extracted Mesh boundary, the corresponding vertices on the two mesh contour edges are connected and filled by the triangular meshing method, as shown in fig. 18. This triangular connection sutures the surgical guide plate into a closed three-dimensional model; the final suturing result is shown in fig. 19.
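The per-vertex normal translation with the repeated-point correction can be sketched as follows (a simplified version that detects repeated points by exact coordinate equality; function names are illustrative):

```python
import numpy as np

def offset_surface(vertices, normals, thickness):
    """Translate each vertex along its (unit, averaged) normal by `thickness`.

    Coincident vertices get the normalized sum of their normals, so their
    offset copies stay coincident, which is the repeated-point correction.
    Returns (new_vertices, unit_normals_used).
    """
    V = np.asarray(vertices, float)
    N = np.asarray(normals, float)
    acc = {}
    for p, n in zip(map(tuple, V), N):
        acc[p] = acc.get(p, np.zeros(3)) + n          # sum normals per position
    out_n = np.array([acc[tuple(p)] for p in V])
    out_n /= np.linalg.norm(out_n, axis=1, keepdims=True)
    return V + thickness * out_n, out_n
```

Two coincident vertices with different normals are thus pushed to a single shared position on the outer surface, avoiding the sharp-angle artifact of fig. 16.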
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, the claims should be construed to include preferred embodiments and all changes and modifications that fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention should also include such modifications and variations.

Claims (10)

1. A puncture surgery simulation method based on a virtual reality technology is characterized by comprising the following steps:
s1, constructing a three-dimensional human body model in the virtual scene based on the specified DICOM image, wherein the human body model is a model with the same proportion as a real human body;
s2, receiving and executing an operation instruction of an operator on the human body model in the virtual simulation system by means of auxiliary equipment of the virtual simulation system, so as to realize simulation of the particle puncture operation based on the human body model;
and S3, acquiring the dynamic position information of the puncture needle in the particle puncture operation for evaluation according to the executed operation command track.
2. The method of claim 1, further comprising:
s4, after receiving the instruction of generating the surgical guide plate, generating the surgical guide plate carrying partial track of the puncture needle according to the dynamic position information of the puncture needle and the associated information of the human body model to which each position information belongs;
printing the surgical guide plate based on a 3D printing technology;
or,
after receiving an instruction for generating the surgical guide plate, generating the surgical guide plate carrying partial tracks of the puncture needle according to the dynamic position information of the puncture needle and the associated information of the human body model to which each position information belongs, and automatically generating a dosimetry planning report;
printing the operation guide plate based on the 3D printing technology.
3. The method of claim 1, wherein S1 includes:
s11, constructing a visualized part of the human body structure in a volume rendering mode based on the size and the interlayer spacing of the appointed DICOM image, wherein the visualized part comprises a cross-sectional view used for cutting and displaying in a multi-angle mode, the DICOM image comprises a CT image in a DICOM format, and the visualized part of the human body structure is a model with the same proportion as the human body in the CT image;
s12, acquiring a multi-angle human skin image, preprocessing a depth image and a color image of the human skin image, completing three-dimensional reconstruction of skin in a visualized part of a human structure through image enhancement, point cloud calculation and registration, data fusion and surface generation processes, and acquiring a skin model of an omnibearing surface skin to cover the visualized part as a three-dimensional human model;
or,
s11a, constructing a visualized part of the human body structure in a volume rendering mode based on the size and the interlayer spacing of the specified DICOM image, wherein the visualized part comprises a cross-sectional view used for cutting and displaying in a multi-angle mode, the DICOM image comprises a CT image in a DICOM format, the visualized part of the human body structure is a model with the same proportion as the human body in the CT image, the visualized part comprises a pathological region of human body tissues, and the visualized part is used as a human body model.
4. The method of claim 3, wherein S11 or S11a comprises:
based on the appointed DICOM image, extracting the data of the DICOM image, preprocessing the extracted data, coloring each preprocessed data by adopting coloring logic, performing three-dimensional visualization processing on the colored data according to a ray projection algorithm, and selecting a human body structure visualization part in the three-dimensional visualization processing by means of human body parameters corresponding to an adjusting algorithm.
5. The method according to any one of claims 1 to 4, wherein S2 includes:
s21, importing the human body model into a virtual simulation system of a Unity3D platform in a binary file mode by adopting a TriLib package mode, and analyzing the human body model so as to construct the human body model in the virtual simulation system;
s22, performing operation on the human body model in the virtual simulation system by means of VR auxiliary equipment, so that the operation equipment model in the virtual simulation system is interacted with the human body model, and the particle puncture operation based on the human body model simulation is realized.
6. The method according to claim 5, wherein the S22 includes:
adjusting the depth and/or path length of the light transmission by means of the handle to modify the phantom in the virtual simulation system or to display different parts of the phantom;
or,
modifying the path of light transmission by means of the handle to display a cross-section of the manikin;
or,
inputting parameters for adjusting a human body model to adjust the human body model, acquiring information of corresponding pathological change tissues in the human body model and visually positioning;
or,
the puncture needle in the surgical equipment model is operated by means of the handle to be inserted into the manikin, and the depth displayed by the manikin is adjusted to simulate the insertion position information of the puncture needle with skin and without skin of the manikin.
7. The method of claim 5, wherein S21 further comprises:
if the human body model constructed in the virtual simulation system does not have skin information, receiving an instruction for increasing the skin information triggered by a user, acquiring skin surface vertex data through adjusting a volume drawing threshold value according to a pre-acquired multi-angle human body skin image, marking a skin area needing skin increase in the human body model in a Unity engine to determine a skin point set of an area to be operated, and completing reconstruction based on a Poisson reconstruction algorithm.
8. The method of claim 2, wherein S4 includes:
if the puncture operation in the virtual simulation system is simulated, receiving an instruction for generating an operation guide plate triggered by a user, acquiring various information of the puncture needle and a corresponding skin range, and generating the operation guide plate for printing by adopting a Poisson reconstruction algorithm, a contour extraction algorithm and a grid stitching algorithm;
the operation guide plate corresponds to a part needing operation in a human body, and the angle, the length and the depth of the puncture needle and the information of the needle inlet hole are displayed on the operation guide plate.
9. The method according to claim 8, wherein S4 specifically includes:
acquiring a skin range of a preset range passed by the puncture needle according to the acquired dynamic position information of the puncture needle, and screening a boundary outline for generating the operation guide plate according to the dynamic position information and a storage rule of a grid to which the skin range belongs;
aiming at the screened boundary contour, a triangular splicing method is adopted for stitching and adding the angle, the length, the depth and the information of a needle inlet hole of the puncture needle;
the repeated grid is removed, and a surgical guide plate for printing is generated.
10. A computing device, comprising: a memory and a processor, wherein the memory stores a computer program, and the processor executes the computer program stored in the memory, in particular, executes the virtual reality technology-based puncture surgery simulation method according to any one of claims 1 to 9.
CN202110251289.4A 2021-03-08 2021-03-08 Puncture surgery simulation method based on virtual reality technology Pending CN113017832A (en)


Publications (1)

Publication Number Publication Date
CN113017832A true CN113017832A (en) 2021-06-25


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102895031A (en) * 2012-09-19 2013-01-30 深圳市旭东数字医学影像技术有限公司 Kidney virtual surgical method and system
US20130211531A1 (en) * 2001-05-25 2013-08-15 Conformis, Inc. Patient-adapted and improved articular implants, designs and related guide tools
CN106821496A (en) * 2016-12-28 2017-06-13 妙智科技(深圳)有限公司 A kind of accurate planning system of percutaneous foramen intervertebrale lens operation and method
CN109247976A (en) * 2018-08-09 2019-01-22 厦门强本宇康科技有限公司 A kind of surgical guide producing device and production method based on three-dimensional modeling
CN110738729A (en) * 2019-10-11 2020-01-31 深圳市一图智能科技有限公司 puncture guide plate three-dimensional model generation method, computer equipment and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113823385A (en) * 2021-09-03 2021-12-21 青岛海信医疗设备股份有限公司 Method, device, equipment and medium for modifying DICOM image
CN113730715B (en) * 2021-10-15 2023-10-03 核工业总医院 Remote anesthesia auxiliary control method and device, electronic equipment and storage medium
CN116898572A (en) * 2023-07-11 2023-10-20 上海医视际医疗科技发展有限公司 Cerebral hemorrhage puncture path setting method and system based on real-time traceable object
CN116898572B (en) * 2023-07-11 2024-01-30 上海医视际医疗科技发展有限公司 Cerebral hemorrhage puncture path setting method and system based on real-time traceable object
CN116935009A (en) * 2023-09-19 2023-10-24 中南大学 Operation navigation system for prediction based on historical data analysis
CN116935009B (en) * 2023-09-19 2023-12-22 中南大学 Operation navigation system for prediction based on historical data analysis

Similar Documents

Publication Publication Date Title
CN113017832A (en) Puncture surgery simulation method based on virtual reality technology
US10942586B1 (en) Interactive 3D cursor for use in medical imaging
CN105992996B (en) Dynamic and interactive navigation in surgical environment
CN107067398B (en) Completion method and device for missing blood vessels in three-dimensional medical model
DE19543410A1 (en) Non=invasive method for investigation of body cavities
CN107924580A (en) The visualization of surface volume mixing module in medical imaging
CN109157284A (en) A kind of brain tumor medical image three-dimensional reconstruction shows exchange method and system
CN106806021A (en) A kind of VR surgery simulation systems and method based on human organ 3D models
Bornik et al. Integrated computer-aided forensic case analysis, presentation, and documentation based on multimodal 3D data
US11995786B2 (en) Interactive image editing
CN113645896A (en) System for surgical planning, surgical navigation and imaging
CN106887044A (en) Three-dimensional entity model construction method and device based on several tomoscan images
CN110993067A (en) Medical image labeling system
CN114913309A (en) High-simulation surgical operation teaching system and method based on mixed reality
TW202207242A (en) System and method for augmented reality spine surgery
CN109872395B (en) X-ray image simulation method based on patch model
Tan et al. Multi-needle particle implantation computer assisted surgery based on virtual reality
Chen et al. A system design for virtual reality visualization of medical image
Reddivari et al. Vrvisu++: A tool for virtual reality-based visualization of mri images
CN117316393B (en) Method, apparatus, device, medium and program product for precision adjustment
CN102592060A (en) Method for guiding equipment to process images by means of ablation treatment images
EP4231246A1 (en) Technique for optical guidance during a surgical procedure
CN102737158A (en) Ablation treatment image booting equipment with three-dimensional image processing device
Su et al. The development of a VR-based treatment planning system for oncology
Ra et al. Visually guided spine biopsy simulator with force feedback

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210625