CN113889277A - Operation experience system based on VR technique - Google Patents

Operation experience system based on VR technique

Info

Publication number
CN113889277A
CN113889277A
Authority
CN
China
Prior art keywords
scene
unit
experiencer
equipment
low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111162209.4A
Other languages
Chinese (zh)
Inventor
谢彩霞
廖凤琴
陈仲斌
贾平
包安竹
张佳
张蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Peoples Hospital of Sichuan Academy of Medical Sciences
Original Assignee
Sichuan Peoples Hospital of Sichuan Academy of Medical Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Peoples Hospital of Sichuan Academy of Medical Sciences filed Critical Sichuan Peoples Hospital of Sichuan Academy of Medical Sciences
Priority to CN202111162209.4A priority Critical patent/CN113889277A/en
Publication of CN113889277A publication Critical patent/CN113889277A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining for simulation or modelling of medical disorders
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016 - Input arrangements with force or tactile feedback as computer generated output to the user
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention discloses a surgery experience system based on VR technology. The system comprises a VR device including an interaction device for acquiring feedback data from the experiencer, a display device for displaying the VR scene, and a computer processing device for processing data. It further comprises a three-dimensional reconstruction module for constructing 3D models and forming the VR scene; a surgical key-process interaction module that, based on the VR scene, forms a plurality of virtual scenes through the VR device; a visual rendering module that realizes the entity object model and coordinate system in the virtual scene and renders a global viewing angle and a local viewing angle; and a haptic rendering module that simulates tactile perception in the virtual scene. The system has the technical effect of informing the patient of the content of the surgical operation in advance.

Description

Operation experience system based on VR technique
Technical Field
The invention relates to the technical field of VR (virtual reality), in particular to a surgery experience system based on VR technology.
Background
Virtual reality technology is a computer simulation technology for creating and experiencing a virtual world: a computer generates an interactive three-dimensional dynamic scene whose simulation of physical behavior immerses the user in the environment. In the medical field, virtual reality technology is mainly applied to building virtual human body models, which makes it easier for medical students to understand the internal organ structure of the human body, lets them conveniently practice operations in a virtual environment, reduces their training cost, and improves their practical skills.
At present, a patient undergoing general or semi-general anesthesia goes through an anesthesia period whose bodily sensations differ greatly from ordinary experience. Because the patient lacks understanding of the surgical procedure and the operating environment before the operation, fear and anxiety arise easily; tension during the operation can distort the feedback of intraoperative information and increase surgical risk; and the unfamiliar sensations of the postoperative recovery period further aggravate tension and anxiety, impairing recovery. However, virtual reality technology has not yet been applied to giving patients an advance somatosensory preview of an anesthetized operation.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and to let an experiencer go through the operation process in advance, by providing a surgery experience system based on VR technology.
In order to achieve the above purpose, the invention provides the following technical scheme:
a surgical experience system based on VR technology, comprising a VR device including an interaction device for acquiring feedback data of an experiencer, a display device for displaying a VR scene, and a computer processing device for processing the data, characterized by a three-dimensional reconstruction module, a surgical critical flow interaction module, a visual rendering module, and a haptic rendering module, wherein,
the three-dimensional reconstruction module is used for constructing 3D models, forming a VR scene and projecting the VR scene to the display device;
the surgical key-process interaction module is used for forming, based on the VR scene, a plurality of virtual scenes through the VR device and for interacting with the experiencer through the VR scene and the interaction device;
the visual rendering module is used for realizing the entity object model and the coordinate system in the virtual scene and for rendering a global viewing angle and a local viewing angle, the global viewing angle comprising a free-moving viewing angle and a head-moving viewing angle;
and the haptic rendering module is used for simulating tactile perception in the virtual scene.
By adopting the above technical solution, VR technology reproduces the real environment, simulates the surgical workflow, and simulates the perception and experience of the patient during the operation; through human-computer interaction, the experiencer completes a full journey through the operation. This helps popularize surgery-related knowledge scientifically, dispels people's excessive fear and anxiety about surgery, and achieves the effect of previewing the content of the operation.
Optionally, the three-dimensional reconstruction module comprises an original-art unit, a low-poly modeling unit, a low-poly sculpting unit, a high-poly retopology unit, a UV-unwrapping unit, a normal-map baking unit, a texture-painting unit, an adjusting unit and a decoding unit;
the original-art unit collects the reference material needed to build the 3D model, by photographing or by drawing original concept art;
the low-poly modeling unit uses 3ds Max or Maya to build a low-poly model from the reference material, generating an initial model;
the low-poly sculpting unit uses ZBrush 4R8 to rewire the initial model, add detail and sculpt it into a high-poly model;
the high-poly retopology unit retopologizes the sculpted high-poly model into a low-poly model;
the UV-unwrapping unit unwraps the UVs of the low-poly model to obtain a UV map;
the normal-map baking unit imports the high-poly and low-poly models together with the UV map and bakes a normal map;
the texture-painting unit paints the diffuse map against the normal map and draws the mix map;
the adjusting unit assembles the low-poly model from the normal, color and mix maps and adjusts its effect to obtain the 3D model;
and the decoding unit decodes the 3D model using the computer processing device and projects it onto the display device.
Optionally, the system further comprises a viewing-angle switching unit; an experiencer (first-person) viewing angle and a third-person viewing angle are provided in the VR scene, and the switching unit is used for switching between the two.
Optionally, the virtual scenes include a preoperative three-party verification scene, a cooperative anesthesia scene, an operation scene and an anesthesia resuscitation scene, wherein:
in the preoperative three-party verification scene, an operating-room scene is constructed; the experiencer confirms personal information from the first-person viewing angle, during which the doctor's spoken questions are played and the experiencer's spoken answers are received;
in the cooperative anesthesia scene, the experiencer observes, from the third-person viewing angle, the doctor establishing venous access and performing tracheal intubation, while voice and panel messages remind the experiencer to cooperate with the anesthesia;
in the operation scene, the experiencer switches to the first-person viewing angle, the screen goes black, and the experiencer is prompted that the operation stage has begun;
and in the anesthesia resuscitation scene, a resuscitation-room scene is constructed and the experiencer enters the resuscitation experience from the first-person viewing angle.
Optionally, the haptic rendering module comprises a force feedback device, a force feedback driving unit, a force-feedback-device interface position tracking unit and a control unit;
the force feedback device is the VR headset's handle, used for haptic interaction with the experiencer and for capturing the experiencer's operations in the VR scene;
the force feedback driving unit acquires the experiencer's operations on the VR headset handle;
the interface position tracking unit connects the VR headset handle to the computer processing device and captures the position of the force feedback device in the VR scene;
and the control unit controls the force feedback device.
Optionally, the display device comprises a binocular display screen and an external display screen;
the binocular display screen provides the VR view to the experiencer;
and the external display screen synchronously shows the experiencer's viewing angle outside the VR device.
Optionally, the visual rendering module adopts a QSplat-based point-cloud visual rendering algorithm.
Compared with the prior art, the invention has the following beneficial effects: VR technology reproduces the real environment, simulates the surgical workflow, and simulates the perception and experience of the patient during the operation; through human-computer interaction, the experiencer completes a full journey through the operation, which helps popularize surgery-related knowledge, dispels excessive fear and anxiety about surgery, and achieves the effect of previewing the content of the operation.
Description of the drawings:
FIG. 1 is a schematic view of the experiencer's first-person perspective in the operating room according to an embodiment of the present invention;
FIG. 2 is a schematic view of the experiencer's third-person perspective in the operating room according to an embodiment of the present invention;
FIG. 3 is a schematic view of a resuscitation room according to an embodiment of the present invention;
fig. 4 is a diagram of a virtual scene quality reference effect according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to test examples and specific embodiments. It should be understood that the scope of the subject matter described above is not limited to the following examples; any technique implemented based on the disclosure of the present invention falls within the scope of the present invention.
Example 1
A VR-technology-based surgery experience system comprises a VR device, which includes an interaction device for interacting with the experiencer, a display device for displaying VR scenes, and a computer processing device for processing data. In this embodiment, the VR device is a Pico Neo 2 all-in-one VR headset, which integrates the interaction device, display device and computer processing device mentioned above.
The display device comprises a binocular display screen and an external display screen. The binocular display screen provides the VR view to the experiencer; it is built into the VR device and needs no additional installation. The external display screen synchronously shows the experiencer's viewing angle outside the VR device; it connects directly to the computer processing device, so that during the experience a staff member can watch, in an ordinary non-VR format, the scene the experiencer is seeing inside the VR device and so keep track of the experiencer's progress through the virtual scenes.
The system further comprises a three-dimensional reconstruction module, a surgical key-process interaction module, a visual rendering module and a haptic rendering module. These are software modules developed with OpenGL in a development environment based on the Windows 7 operating system platform and Microsoft Visual C++ 2010.
The three-dimensional reconstruction module constructs 3D models, forms the VR scene and projects it to the display device. 3ds Max is used as the modeling software to generate three-dimensional geometric models of objects; the software system completes the main steps of storage, reading, decoding and display, and projects the generated VR scene onto the VR headset's display screen.
During three-dimensional reconstruction, an editable-polygon object has five sub-object levels (vertex, edge, border, polygon and element), each of which can be adjusted independently.
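As an illustration only (the patent discloses no source code), the five sub-object levels can be pictured as the index arrays of a simple mesh structure. The C++ sketch below is a hypothetical representation, in keeping with the C++/OpenGL environment named above:

    #include <vector>

    struct Vertex  { float x, y, z; };             // level 1: vertices
    struct Edge    { int v0, v1; };                // level 2: edges (vertex index pairs)
    struct Polygon { std::vector<int> verts; };    // level 4: polygons (n-gons)

    struct EditableMesh {
        std::vector<Vertex>  vertices;
        std::vector<Edge>    edges;
        std::vector<int>     borderEdges;          // level 3: edges bounding an opening
        std::vector<Polygon> polygons;
        std::vector<std::vector<int>> elements;    // level 5: connected polygon groups

        // Each level can be adjusted independently; e.g. moving a single vertex:
        void moveVertex(int i, float dx, float dy, float dz) {
            vertices[i].x += dx; vertices[i].y += dy; vertices[i].z += dz;
        }
    };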
The three-dimensional reconstruction module comprises an original-art unit, a low-poly modeling unit, a low-poly sculpting unit, a high-poly retopology unit, a UV-unwrapping unit, a normal-map baking unit, a texture-painting unit, an adjusting unit and a decoding unit.
The original-art unit collects the reference material needed to build the 3D model, by photographing or by drawing original concept art.
The low-poly modeling unit uses 3ds Max or Maya to build a low-poly model from the reference material, generating an initial model. In 3ds Max, basic objects such as cuboids and cylinders are created and assembled, then converted into editable polygons whose points, edges and faces are adjusted and joined to obtain the finished initial model.
The low-poly sculpting unit imports the initial model into ZBrush 4R8, rewires the topology to add detail, and then sculpts. During sculpting the model is observed from multiple angles against the reference images, and the work proceeds in stages: the large forms are blocked out first and the details are made afterwards.
The high-poly retopology unit: after the high-poly model has been sculpted in ZBrush 4R6, its face count may reach tens of thousands of faces or more, and such a model cannot be imported into the development engine; a low-poly model must therefore be retopologized from it for import. In this embodiment, the low-poly model is generated either with dedicated retopology software such as TopoGun or with ZBrush 4R6's own automatic retopology function.
The UV-unwrapping unit unwraps the low-poly model obtained from the high-poly retopology: the low-poly model needs unwrapped UVs before a normal map can be baked onto it. The UVs can be unwrapped in 3ds Max or Maya, or in dedicated UV-unwrapping software.
The normal-map baking unit: baking a normal map is a render calculation performed by the 3D modeling software that creates various three-dimensional effects on an object's surface. A normal map is a special kind of texture that records the difference information between the high-poly and the low-poly model; this detail-rich surface information can create convincing stereoscopic visual effects on otherwise flat, unremarkable shapes.
This unit imports the high-poly model of tens of thousands of faces together with the unwrapped low-poly model into xNormal 3 and bakes a normal map. The map contains the detail of the tens-of-thousands-face model, so when it is applied to a 3D model of only a few thousand faces, that model shows the same detail.
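The principle of the bake can be sketched briefly: for every texel of the low-poly model's UV layout, the baker finds the corresponding point on the high-poly surface and stores that surface normal as an RGB color. The C++ sketch below illustrates only this encoding step; sampleHighPolyNormal() is a hypothetical stand-in for the ray cast that a real baker such as xNormal performs:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Hypothetical stand-in for the baker's ray cast from the low-poly surface
    // to the high-poly surface; stubbed here as a flat "straight up" normal.
    Vec3 sampleHighPolyNormal(float /*u*/, float /*v*/) { return {0.0f, 0.0f, 1.0f}; }

    // Bake a width x height tangent-space normal map: each unit-length normal
    // component is remapped from [-1, 1] to the byte range [0, 255].
    std::vector<std::uint8_t> bakeNormalMap(int width, int height) {
        std::vector<std::uint8_t> rgb(static_cast<std::size_t>(width) * height * 3);
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                Vec3 n = sampleHighPolyNormal((x + 0.5f) / width, (y + 0.5f) / height);
                std::size_t i = 3 * (static_cast<std::size_t>(y) * width + x);
                rgb[i + 0] = static_cast<std::uint8_t>((n.x * 0.5f + 0.5f) * 255.0f);
                rgb[i + 1] = static_cast<std::uint8_t>((n.y * 0.5f + 0.5f) * 255.0f);
                rgb[i + 2] = static_cast<std::uint8_t>((n.z * 0.5f + 0.5f) * 255.0f);
            }
        }
        return rgb;
    }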
The texture-painting unit paints the diffuse map and renders the UV template. The unit loads the normal map for comparison and paints against it in Substance Painter; setting the resolution of the painted map to twice that of the normal map gives a better sense of surface relief.
The adjusting unit assembles the low-poly model from the normal, color and mix maps and adjusts its effect to obtain the 3D model. The color, normal and UV maps are exported from Substance Painter and the three maps are combined; the final low-poly model is exported from 3ds Max and brought into Unity for final effect adjustment, yielding the final 3D model. The normal, color and mix maps are the three kinds of surface-information images exported by the 3D software: the color map records color information, the normal map records surface relief detail, and the mix map records color-mixing information.
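As an illustration of what "color-mixing information" can mean in practice (an assumption on our part, not the patent's code), a mix map is often used as a per-texel blend weight between two color sources, while the normal map perturbs the shading normal. A minimal C++ sketch of the blend:

    struct Color { float r, g, b; };

    // Linear blend of two colors by weight t in [0, 1].
    Color lerp(const Color& a, const Color& b, float t) {
        return { a.r + (b.r - a.r) * t,
                 a.g + (b.g - a.g) * t,
                 a.b + (b.b - a.b) * t };
    }

    // base: color-map texel; detail: secondary color source;
    // mixWeight: the mix-map texel that records the blending information.
    Color mixTexel(const Color& base, const Color& detail, float mixWeight) {
        return lerp(base, detail, mixWeight);
    }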
The decoding unit decodes the 3D model using the computer processing device in the VR device and projects it onto the display device.
The visual rendering module realizes the entity object model and the coordinate system in the virtual scene and renders the global and local viewing angles, the global viewing angle comprising a free-moving viewing angle and a head-moving viewing angle; for the global and local views see Figs. 1 and 2. In this embodiment, the global viewing angle is mainly the experiencer's first-person perspective in the virtual scene, and the local viewing angle is mainly the experiencer's third-person perspective.
In the visual rendering module, a QSplat-based point-cloud visual rendering algorithm renders the global and local viewing angles. The QSplat algorithm is designed so that drawing can be hardware-accelerated; it can draw models of tens of thousands of points in real time and uses a highly compressed data structure, which speeds up rendering.
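A minimal sketch of the QSplat idea as used here (an illustration, not the module's actual source): the point cloud is stored as a hierarchy of bounding spheres, and each frame the hierarchy is walked until a node's projected size drops below a pixel threshold, at which point the node is drawn as a single splat. drawSplat() stands in for the actual OpenGL point or quad submission:

    #include <vector>

    struct SplatNode {
        float cx, cy, cz, radius;      // bounding sphere
        float nx, ny, nz;              // averaged normal
        unsigned char r, g, b;         // quantized color
        std::vector<SplatNode> children;
    };

    // Crude perspective estimate: on-screen size scales as radius / distance.
    float projectedSize(const SplatNode& n, float viewerDistance) {
        return n.radius / viewerDistance;
    }

    void drawSplat(const SplatNode& n) { /* submit one point/quad to OpenGL here */ }

    // Walk the sphere hierarchy; leaves and small-enough interior nodes
    // are each drawn as one splat, giving built-in level of detail.
    void renderQSplat(const SplatNode& node, float viewerDistance, float pixelThreshold) {
        if (node.children.empty() || projectedSize(node, viewerDistance) < pixelThreshold) {
            drawSplat(node);
            return;
        }
        for (const SplatNode& child : node.children)
            renderQSplat(child, viewerDistance, pixelThreshold);
    }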
The haptic rendering module simulates tactile perception in the virtual scene and comprises a force feedback device, a force feedback driving unit, a force-feedback-device interface position tracking unit and a control unit.
the force feedback equipment is a VR helmet handle and is used for performing touch interaction with an experiencer and acquiring the operation of the experiencer in a VR scene; it utilizes a developer document SDK provided by the helmet to import the SDK in a development engine.
The force feedback driving unit is installed in the development engine and acquires the experiencer's operations on the VR headset handle. This embodiment uses the Pico Neo 2's development toolchain: during development, open the Project tab, expand Assets > PicoMobileSDK > Pvr_UnitySDK > Prefabs in turn, drag the Pvr_UnitySDK prefab into the scene, set both the Position and Rotation of its Transform component to (0, 0, 0), and place the scene's MainCamera under Pvr_UnitySDK > Head.
The force-feedback-device interface position tracking unit connects the VR headset handle to the computer processing device and captures the position of the force feedback device in the VR scene; the control unit controls the force feedback device. During development, open the Project tab and expand Assets > PicoMobileSDK > Pvr_Controller > Prefabs in turn; drag the ControllerManager prefab into the scene; drag the Goblin_Controller prefab under Pvr_UnitySDK in the scene; and place the scene's MainCamera under Pvr_UnitySDK > Goblin_Controller.
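In outline, the tracking unit's per-frame work can be sketched SDK-agnostically as follows (a hypothetical C++ illustration; pollHandlePose() and pollTriggerDown() stand in for the Pico SDK's actual calls and are not real API names):

    struct Pose { float px, py, pz; float qx, qy, qz, qw; };

    Pose pollHandlePose()  { return {}; }    // stub for the runtime's pose query
    bool pollTriggerDown() { return false; } // stub for the trigger-state query

    class ControlUnit {
    public:
        void onHandleMoved(const Pose& scenePose) { lastPose_ = scenePose; }
        void onTriggerDown() { /* forward to the active virtual scene */ }
    private:
        Pose lastPose_{};
    };

    // One tracking tick: read the handle pose, shift it into the VR scene's
    // coordinate system, and hand any trigger press to the control unit.
    void trackingTick(ControlUnit& control, const Pose& sceneOrigin) {
        Pose p = pollHandlePose();
        p.px += sceneOrigin.px; p.py += sceneOrigin.py; p.pz += sceneOrigin.pz;
        control.onHandleMoved(p);
        if (pollTriggerDown()) control.onTriggerDown();
    }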
The system further comprises a viewing-angle switching unit: an experiencer (first-person) viewing angle and a third-person viewing angle are provided in the VR scene, and the unit switches between the two. It is used by the surgical key-process interaction module to switch the experiencer's viewing angle and give the experiencer a more complete surgical experience.
Based on the VR scene, the surgical key-process interaction module forms a plurality of virtual scenes through the VR device, interacts with the experiencer through the VR scene, and acquires the experiencer's feedback data through the interaction device, such as operations, viewing-angle turns or speech in the VR scene.
The virtual scenes comprise a preoperative three-party verification scene, a cooperative anesthesia scene, an operation scene and an anesthesia resuscitation scene. The experiencer goes through them along a timeline in that order, forming one complete surgical process.
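This fixed ordering can be written as a small state machine; the following C++ sketch is illustrative only (the patent does not disclose source code):

    // The four virtual scenes are experienced strictly in timeline order.
    enum class Scene { PreOpCheck, Anesthesia, Surgery, Resuscitation, Finished };

    Scene nextScene(Scene current) {
        switch (current) {
            case Scene::PreOpCheck:    return Scene::Anesthesia;
            case Scene::Anesthesia:    return Scene::Surgery;
            case Scene::Surgery:       return Scene::Resuscitation;
            case Scene::Resuscitation: return Scene::Finished;
            default:                   return Scene::Finished;
        }
    }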
After the virtual scenes are constructed, the resuscitation-room scene is as shown in Fig. 3, and the reference quality of the virtual scenes is as shown in Fig. 4.
An operating-room scene is built in the preoperative three-party verification scene. After opening the VR software, the experiencer enters the main interface and clicks a UI button to enter the flow. By default the experiencer starts lying on a sickbed in a dedicated passage, seen from the first-person viewing angle, where the basic user guidance is completed: a UI box pops up that teaches the use of the trigger key and the side grip key, how to trigger buttons with the ray and how to move position, and it mentally prepares the experiencer for the transfer from the ward to the operating room. The experiencer is then wheeled into the operating room by the ward nurse, still in first-person view.
After the experiencer enters the operating-room scene in first person, the surgeon confirms the experiencer's personal information: a prompt panel in front of the experiencer displays the surgeon's questions, the panel content is played aloud, and the experiencer answers the surgeon by voice; once the answer is detected, the panel reports that the preoperative three-party verification has succeeded. (If no spoken answer from the experiencer is detected, then after a 5-second wait the information panel announces that the preoperative three-party information has been confirmed and moves on to cooperative anesthesia.) After successful verification, the panel prompt appears: please cooperate with the anesthesia.
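The fallback described above (proceed after 5 seconds if no spoken answer is detected) amounts to a simple timed wait. A hypothetical C++ sketch, where voiceDetected() stands in for the actual speech-detection call:

    #include <chrono>
    #include <thread>

    bool voiceDetected() { return false; }  // stub for the speech-detection query

    // Returns true if the experiencer answered within the timeout; on false,
    // the panel announces that pre-op verification is complete and moves on.
    bool waitForAnswer(std::chrono::seconds timeout = std::chrono::seconds(5)) {
        auto deadline = std::chrono::steady_clock::now() + timeout;
        while (std::chrono::steady_clock::now() < deadline) {
            if (voiceDetected()) return true;
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
        }
        return false;
    }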
After entering the cooperative anesthesia scene, the experiencer switches to the third-person viewing angle to observe. A panel prompt box then appears in front of the experiencer: please cooperate with the doctor in establishing venous access. A highlight then appears over the vein region, and when the experiencer's handle triggers the highlight the panel prompts: venous access has been established, please cooperate with the doctor's tracheal intubation. A highlight then appears over the mouth region, and when the handle triggers it the panel prompts: the tracheal cannula is in place, anesthesia is taking effect, and the patient will feel no pain during the operation, as if asleep.
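The highlight-then-trigger gating of these two anesthesia steps can be sketched as a small step table (illustrative C++; region names and prompts are paraphrased from the scene described above):

    #include <string>
    #include <vector>

    struct Step { std::string region; std::string donePrompt; bool done = false; };

    std::vector<Step> anesthesiaSteps = {
        {"vein",  "Venous access established; please cooperate with tracheal intubation."},
        {"mouth", "Tracheal cannula in place; anesthesia is taking effect."},
    };

    // Called when the handle's trigger fires; hitRegion is the highlighted
    // region currently hit by the handle's ray (empty if none).
    void onTriggerPulled(const std::string& hitRegion) {
        for (Step& s : anesthesiaSteps) {
            if (!s.done && s.region == hitRegion) {
                s.done = true;   // show s.donePrompt on the panel, advance highlight
                break;
            }
        }
    }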
After entering the operation scene, the experiencer switches back to the first-person viewing angle and the display goes black; a panel dialog box appears: please lie flat on the operating bed and cooperate with the operation. After the experiencer clicks the UI confirmation button, the next step continues, and the panel prompt box in front of the experiencer reads: during the operation the experiencer will feel no pain, as if asleep.
Ten seconds of music then plays. After the music stops, the screen slowly brightens and the experiencer enters the resuscitation-room scene in the first-person viewing angle, entering the anesthesia resuscitation scene, while the doctor's pre-recorded voice plays by the ear: the surgery was very successful.
By adopting VR technology to reproduce the real environment, simulate the surgical workflow, and simulate the patient's perception and experience during the operation, and by conducting human-computer interaction with the experiencer inside the VR device, the experiencer completes a full journey through the operation. This helps the hospital popularize surgery-related knowledge, dispels people's excessive fear and anxiety about surgery, and achieves the effect of informing the patient of the operation's content in advance, thereby reducing doctors' workload and promoting the spread of surgery-related knowledge.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (7)

1. A surgical experience system based on VR technology, comprising a VR device including an interaction device for acquiring feedback data from an experiencer, a display device for displaying a VR scene, and a computer processing device for processing the data, characterized by a three-dimensional reconstruction module, a surgical key-process interaction module, a visual rendering module and a haptic rendering module, wherein:
the three-dimensional reconstruction module is used for constructing 3D models, forming a VR scene and projecting the VR scene to the display device;
the surgical key-process interaction module is used for forming, based on the VR scene, a plurality of virtual scenes through the VR device and for interacting with the experiencer through the VR scene and the interaction device;
the visual rendering module is used for realizing the entity object model and the coordinate system in the virtual scene and for rendering a global viewing angle and a local viewing angle, the global viewing angle comprising a free-moving viewing angle and a head-moving viewing angle;
and the haptic rendering module is used for simulating tactile perception in the virtual scene.
2. The VR-technology-based surgical experience system of claim 1, wherein the three-dimensional reconstruction module includes an original-art unit, a low-poly modeling unit, a low-poly sculpting unit, a high-poly retopology unit, a UV-unwrapping unit, a normal-map baking unit, a texture-painting unit, an adjusting unit and a decoding unit;
the original-art unit collects the reference material needed to build the 3D model, by photographing or by drawing original concept art;
the low-poly modeling unit uses 3ds Max or Maya to build a low-poly model from the reference material, generating an initial model;
the low-poly sculpting unit uses ZBrush 4R8 to rewire the initial model, add detail and sculpt it into a high-poly model;
the high-poly retopology unit retopologizes the sculpted high-poly model into a low-poly model;
the UV-unwrapping unit unwraps the UVs of the low-poly model to obtain a UV map;
the normal-map baking unit imports the high-poly and low-poly models together with the UV map and bakes a normal map;
the texture-painting unit paints the diffuse map against the normal map and draws the mix map;
the adjusting unit assembles the low-poly model from the normal, color and mix maps and adjusts its effect to obtain the 3D model;
and the decoding unit decodes the 3D model using the computer processing device and projects it onto the display device.
3. The VR-technology-based surgical experience system of claim 1, further comprising a viewing-angle switching unit, wherein an experiencer (first-person) viewing angle and a third-person viewing angle are provided in the VR scene, and the switching unit is configured to switch between the two.
4. The VR-technology-based surgical experience system of claim 3, wherein the virtual scenes include a preoperative three-party verification scene, a cooperative anesthesia scene, an operation scene and an anesthesia resuscitation scene, wherein:
in the preoperative three-party verification scene, an operating-room scene is constructed; the experiencer confirms personal information from the first-person viewing angle, during which the doctor's spoken questions are played and the experiencer's spoken answers are received;
in the cooperative anesthesia scene, the experiencer observes, from the third-person viewing angle, the doctor establishing venous access and performing tracheal intubation, while voice and panel messages remind the experiencer to cooperate with the anesthesia;
in the operation scene, the experiencer switches to the first-person viewing angle, the screen goes black, and the experiencer is prompted that the operation stage has begun;
and in the anesthesia resuscitation scene, a resuscitation-room scene is constructed and the experiencer enters the resuscitation experience from the first-person viewing angle.
5. The VR-technology-based surgical experience system of claim 1, wherein the haptic rendering module includes a force feedback device, a force feedback driving unit, a force-feedback-device interface position tracking unit and a control unit;
the force feedback device is the VR headset's handle, used for haptic interaction with the experiencer and for capturing the experiencer's operations in the VR scene;
the force feedback driving unit acquires the experiencer's operations on the VR headset handle;
the interface position tracking unit connects the VR headset handle to the computer processing device and captures the position of the force feedback device in the VR scene;
and the control unit controls the force feedback device.
6. The VR-technology-based surgical experience system of claim 1, wherein the display device includes a binocular display screen and an external display screen;
the binocular display screen provides the VR view to the experiencer;
and the external display screen synchronously shows the experiencer's viewing angle outside the VR device.
7. The VR-technology-based surgical experience system of claim 1, wherein the visual rendering module employs a QSplat-based point-cloud visual rendering algorithm.
CN202111162209.4A 2021-09-30 2021-09-30 Operation experience system based on VR technique Pending CN113889277A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111162209.4A CN113889277A (en) 2021-09-30 2021-09-30 Operation experience system based on VR technique

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111162209.4A CN113889277A (en) 2021-09-30 2021-09-30 Operation experience system based on VR technique

Publications (1)

Publication Number Publication Date
CN113889277A (en) 2022-01-04

Family

ID=79004996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111162209.4A Pending CN113889277A (en) 2021-09-30 2021-09-30 Operation experience system based on VR technique

Country Status (1)

Country Link
CN (1) CN113889277A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115068261A (en) * 2022-06-15 2022-09-20 中国人民解放军陆军军医大学第一附属医院 Operation experience bed and use method thereof



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination