CN115421594A - Device and method based on virtual training and examination scoring - Google Patents
Device and method based on virtual training and examination scoring
- Publication number
- CN115421594A (application CN202211029249.6A)
- Authority
- CN
- China
- Prior art keywords
- module
- virtual
- trainer
- training
- handheld
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/016—Input arrangements with force or tactile feedback as computer generated output to the user
- G06Q10/06393—Score-carding, benchmarking or key performance indicator [KPI] analysis
- G06Q50/205—Education administration or guidance
- G06T15/005—General purpose rendering architectures
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T19/006—Mixed reality
- G09B9/00—Simulators for teaching or training purposes
Abstract
The invention relates to a device and a method based on virtual training and examination scoring, and addresses the defect that a trainer cannot repeatedly practice invasive nursing operations, such as intravenous injection, under a variety of conditions. The device comprises: a scene module that generates three-dimensional virtual models of human tissue with different parameters according to an instruction and that, while building a model of the inner base layer located inside the human tissue, sets an implicit grid on the surface of the inner base layer; and a handheld simulation module with which the trainer operates on the three-dimensional virtual model in the established virtual scene. The handheld simulation module outputs data describing changes of its working position, so that the interaction module realizes the interaction and fusion between the trainer's training operation and the three-dimensional virtual model of the scene module, and the implicit grid can detect collisions between the handheld simulation module and the inner base layer tissue.
Description
Technical Field
The invention relates to the technical field of virtual nursing training, in particular to a device and a method based on virtual training and examination scoring.
Background
Virtual surgery simulation is an important direction in the development of virtual reality technology in the medical field. Starting from a sequence of two-dimensional medical images, virtual three-dimensional models of human organs and soft tissue are reconstructed in a virtual environment; combined with interaction technologies and equipment such as position tracking and haptic feedback, a realistic surgical environment is simulated, giving the operating physician a highly immersive, interactive virtual operating environment in which the entire clinical procedure can be rehearsed. Compared with traditional surgical training, a virtual surgery system offers clinicians a more efficient and accurate means of training and analysis: before performing an operation, a surgeon can run repeated simulations on a computer and therefore has sufficient information and time to plan the operation effectively and to anticipate problems that may arise during surgery. Because a virtual surgery system is inherently repeatable, it also provides an effective training tool for surgeons and shortens the time a new surgeon needs at the operating table.
Patent document CN111798727B discloses a virtual lateral ventricle puncture auxiliary training method, device, equipment and medium. It trains the trainee's spatial reasoning by relating two-dimensional CT images to the three-dimensional lateral ventricle, thereby teaching the trainee to read CT images; this reading training helps the trainee select a suitable puncture point and puncture depth from the CT. Path judgment training teaches the trainee to control the puncture direction: because the brain is an irregular sphere and the puncture must be perpendicular to it, mastering the perpendicular angle is critical, and path judgment training further improves the trainee's ability to plan an operating path beyond the experience accumulated during technical training. Although that patent guides the trainee in finding a suitable puncture site and puncture depth, the puncture paths are planned in advance for the trainee to choose from, and the method is only applicable to ventricular puncture. It cannot help a trainee practice puncturing fine structures such as blood vessels, even though medical care places heavy demands on intravenous injection and intramuscular injection; existing training modes cannot effectively correct problems in the trainer's puncture technique, and existing teaching modes cannot simulate the wide range of complex nursing situations, in particular puncture of human tissue in which the blood vessels are thin or difficult to see.
Therefore, the invention provides a training device that uses virtual training to help a trainer complete a variety of nursing operations in a virtual environment and that helps the user correct errors made during those operations. By simulating different acupuncture conditions with the virtual training device, the trainer can complete puncture training with a standard, accurate needle angle, force and displacement under each of those conditions. The device also improves the virtual-environment reconstruction mechanism of the display unit of the virtual training device, reducing the likelihood that the trainer experiences a blurred or unstable picture.
Furthermore, persons skilled in the art may understand the cited art differently, and although the inventor studied a large number of documents and patents when making the present invention, space does not permit listing all of their details and contents. This should not be taken to mean that the present invention lacks these prior-art features; on the contrary, the present invention may be provided with all of the features of the prior art, and the applicant reserves the right to add related prior art to the background section.
Disclosure of Invention
Aiming at the defects of the prior art, the technical solution of the invention provides a device based on virtual training and examination scoring, which comprises: a scene module that generates three-dimensional virtual models of human tissue with different parameters according to an instruction and that, while building a model of the inner base layer located inside the human tissue, sets an implicit grid on the surface of the inner base layer; and a handheld simulation module with which a trainer performs training operations on the three-dimensional virtual model in the established virtual scene. The handheld simulation module outputs data describing changes of its working position, so that the interaction module realizes the interaction and fusion between the trainer's training operation and the three-dimensional virtual model of the scene module, and the implicit grid can detect collisions between the handheld simulation module and the inner base layer tissue. The advantage of this arrangement is that, while a three-dimensional virtual model consistent with the structure of real human tissue is built, an implicit grid for detecting collisions between the virtual medical appliance corresponding to the handheld simulation module and the blood vessel is placed on the surface of the inner base layer, which reduces the time and space complexity of the collision detection algorithm used in existing virtual training technology. Combining the surface model represented by the inner base layer and its implicit grid with the volume model of the virtual medical appliance improves the efficiency of collision detection between medical appliances such as needles and needle tubes and the skin and inner base layer tissue.
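The source does not give an implementation of the implicit-grid collision test. The following is a minimal sketch, assuming the grid is stored as a triangulated surface mesh on the vessel wall and the needle-tip motion in one frame is approximated by a line segment; the function names and the per-triangle loop are illustrative, not taken from the patent.

```python
import numpy as np

EPS = 1e-9

def segment_intersects_triangle(p0, p1, v0, v1, v2):
    """Moller-Trumbore test restricted to the segment p0 -> p1.

    Returns the parametric hit position t in [0, 1] along the segment,
    or None if the segment does not cross the triangle."""
    d = p1 - p0                      # segment direction (not normalized)
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(d, e2)
    det = np.dot(e1, pvec)
    if abs(det) < EPS:               # segment parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = p0 - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, e1)
    v = np.dot(d, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, qvec) * inv_det
    return t if 0.0 <= t <= 1.0 else None

def needle_crosses_wall(tip_prev, tip_now, wall_triangles):
    """Check one frame of needle-tip displacement against every triangle
    of the surface grid attached to the vessel wall."""
    for (v0, v1, v2) in wall_triangles:
        t = segment_intersects_triangle(tip_prev, tip_now, v0, v1, v2)
        if t is not None:
            return True, t           # collision with the inner base layer surface
    return False, None
```

In practice the triangle loop would be pruned with a spatial hash or bounding-volume hierarchy; the point of the sketch is that a surface-only test avoids intersecting the needle against a full volumetric tissue model.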
According to a preferred embodiment, the scene module and the interaction module are both arranged on a wearing module; the trainer enters the virtual scene through the wearing module and uses the handheld simulation module to perform puncture training on a three-dimensional virtual model in that virtual scene. When the handheld simulation module collides with the inner base layer model located inside the human tissue represented by the three-dimensional virtual model, it enters the inner base layer model and is captured by the implicit grid attached to the surface of the inner base layer, whereby the trainer completes a single puncture training. The advantage of this is that, by detecting collisions between the handheld simulation module and the blood vessel in the virtual environment, the correctness of the trainer's puncture training and the operating posture associated with that correctness can be analyzed effectively, the operating problems underlying different puncture errors can be corrected, and the trainer is helped to perform vascular puncture quickly and effectively under conditions such as thin blood vessels, thin inner base layer tissue and blood vessels that are difficult to see.
According to a preferred embodiment, the handheld simulation module also acquires the puncture length and puncture angle of the trainer during each training operation, and the acquired puncture length and puncture angle are corrected using the change in position of the trainer's hand holding the handheld simulation module, captured by the visual unit of the wearing module, so that the change of position of the handheld simulation module in the virtual scene coincides with its change of position in the real scene. The advantage of this is that the actual hand movement is kept consistent with the change in working position of the medical appliance in the virtual environment, and any positional deviation is corrected by means of image acquisition, giving the trainer a better virtual training experience.
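The correction formula is not specified in the source. A minimal sketch of one plausible approach is to blend the device-reported position with the camera-tracked hand position through a fixed-gain complementary filter; the gain `alpha` and the function names are assumptions.

```python
import numpy as np

def correct_device_position(device_pos, vision_pos, alpha=0.8):
    """Blend the position reported by the handheld simulation module with
    the hand position tracked by the visual unit of the wearing module.

    alpha close to 1.0 trusts the device sensors; lower values lean on the
    camera. The blended value drives the virtual medical appliance."""
    device_pos = np.asarray(device_pos, dtype=float)
    vision_pos = np.asarray(vision_pos, dtype=float)
    return alpha * device_pos + (1.0 - alpha) * vision_pos

def puncture_angle(needle_dir, skin_normal):
    """Angle in degrees between the needle axis and the skin surface plane,
    recomputed after the position correction above."""
    needle_dir = needle_dir / np.linalg.norm(needle_dir)
    skin_normal = skin_normal / np.linalg.norm(skin_normal)
    # angle to the plane equals 90 degrees minus the angle to the normal
    cos_to_normal = np.clip(abs(np.dot(needle_dir, skin_normal)), -1.0, 1.0)
    return 90.0 - np.degrees(np.arccos(cos_to_normal))
```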
According to a preferred embodiment, the scene module can also convert the handheld simulation module into a virtual medical appliance in the virtual environment, so that the trainer can change the posture, angle and motion state of the handheld virtual medical appliance according to the physical feedback signal emitted by the handheld simulation module. When the virtual medical appliance represented by the handheld simulation module is inserted into a blood vessel, the scene module indicates the correctness of the trainer's training operation by color rendering; when the insertion movement of the virtual medical appliance is not detected by the implicit grid, or when the virtual medical appliance collides with the inner base layer model a second time and this is detected by the implicit grid, the scene module indicates the error in the trainer's training operation by rendering in a different color.
According to a preferred embodiment, the scene module can also output the training operation data generated in the virtual scene it has established to an analysis module for analysis and classified storage, wherein the analysis module analyzes and compiles statistics on the trainer's training performance from the received training operation data, so as to mark, according to the analysis result, the three-dimensional virtual models on which the trainer needs to continue training.
According to a preferred embodiment, the scene module establishes a three-dimensional virtual model suited to the trainer according to the marking result produced by the analysis module, so that the trainer can train intensively on the weaknesses in his or her own operation.
According to a preferred embodiment, the visual unit of the wearing module can also filter the rotation of the trainer's head against a preset deflection angle threshold, so that the threshold determines whether the three-dimensional virtual model in the virtual scene is reconstructed to follow changes of the trainer's head posture.
According to a preferred embodiment, the handheld simulation module monitors the force applied by the trainer during the training operation through a pressure sensing unit arranged on the surface of its hand-held area, and the trainer's manner of applying force during the puncture operation is analyzed from the measured force together with the puncture result.
The technical solution of the invention also provides a method based on virtual training and examination scoring, comprising at least the following steps: generating three-dimensional virtual models of human tissue with different parameters according to an instruction, and setting an implicit grid on the surface of the inner base layer while establishing a model of the inner base layer located inside the human tissue; the trainer operates on the three-dimensional virtual model in the established virtual scene, and the handheld simulation module outputs data describing changes of its working position, so that the interaction module realizes the interaction and fusion between the trainer's training operation and the three-dimensional virtual model, and the implicit grid can detect collisions between the handheld simulation module and the inner base layer tissue.
According to a preferred embodiment, the scene module and the interaction module are both arranged on a wearing module; the trainer enters the virtual scene through the wearing module and performs puncture training on a three-dimensional virtual model in that virtual scene using the handheld simulation module. When the handheld simulation module collides with the inner base layer model located inside the human tissue represented by the three-dimensional virtual model, it enters the inner base layer model and is captured by the implicit grid attached to the surface of the inner base layer, whereby the trainer completes a single puncture training.
Drawings
Fig. 1 is a schematic structural diagram of the wearing module of a preferred device based on virtual training and examination scoring according to the present invention;
fig. 2 is a schematic workflow diagram of a preferred device and method based on virtual training and examination scoring according to the present invention.
List of reference numerals
1: a scene module; 2: a hand-held simulation module; 3: an interaction module; 4: a wearing module; 5: an analysis module; 11: a scene creation unit; 12: a scene output unit; 21: a pressure sensing unit; 41: a vision unit; 42: and a virtual scene display unit.
Detailed Description
The following detailed description is made with reference to the accompanying drawings.
Example 1
The present application provides a device based on virtual training and examination scoring, which comprises a scene module 1, a handheld simulation module 2, an interaction module 3, a wearing module 4 and an analysis module 5.
According to the specific embodiment shown in fig. 1 and 2, the scene module 1 can generate three-dimensional virtual models of human tissue with different parameters in a virtual scene according to an instruction entered by the trainer or an administrator at a control terminal. The scene module 1 builds a number of three-dimensional virtual models from a pre-loaded data set, so that an existing model can either be called up directly or be fine-tuned and reconstructed according to the instruction. The scene module 1 can also convert the handheld simulation module 2 held by the trainer into a virtual medical appliance in the virtual scene, so that the trainer, while wearing the wearing module 4, trains on the three-dimensional virtual model using the virtual medical appliance corresponding to the handheld simulation module 2. Data fusion between the scene module 1 and the handheld simulation module 2 is achieved through the interaction module 3, so that changes in the working position of the handheld simulation module 2 translate directly into training operations of the virtual medical appliance on the three-dimensional virtual model in the virtual scene.
Preferably, the scene module 1 comprises a scene creation unit 11 and a scene output unit 12. The scene creation unit 11 builds the three-dimensional virtual model simulating human tissue in the virtual scene and builds, inside that model, an inner base layer model located within the human tissue. Preferably, while creating the inner base layer model, the scene creation unit 11 sets an implicit mesh on its surface, so that collisions between the virtual medical appliance corresponding to the handheld simulation module 2 and the inner base layer model can be detected with the implicit mesh. Preferably, the inner base layer tissue is a venous or arterial blood vessel; further preferably, it may be any other subcutaneous tissue or organ. Preferably, the scene creation unit 11 also completes, via the interaction module 3, the interaction and image fusion between the established three-dimensional virtual model and the virtual medical appliance corresponding to the handheld simulation module 2, and the fused virtual image is transmitted by the scene output unit 12 to the virtual scene display unit 42 of the wearing module 4, so that the trainer can adjust the working position of the handheld simulation module 2 according to the relative position of the three-dimensional virtual model and the virtual medical appliance in the virtual scene.
Preferably, this embodiment is described mainly with blood vessels selected as the inner base layer tissue. Specifically, the inner base layer model is a blood vessel model, the inner base layer tissue is the group of blood vessels, the surface of the inner base layer is the blood vessel wall, and in particular the surface carrying the implicit grid is the inner surface of the blood vessel wall. Preferably, when the virtual medical appliance represented by the handheld simulation module 2 is inserted into a blood vessel, the scene module 1 indicates the correctness of the trainer's operation by color rendering; when the insertion motion of the virtual medical appliance is not detected by the implicit grid, or when the virtual medical appliance collides with the blood vessel model a second time and this is detected by the implicit grid, the scene module 1 indicates the error in the trainer's operation by rendering in a different color. The scene creation unit 11 of the scene module 1 thus feeds the trainer's operations back as rendered images during training. For example, when the trainer holds the handheld simulation module 2 for vascular puncture training: if the virtual medical needle corresponding to the handheld simulation module 2 pierces the blood vessel correctly, the scene creation unit 11 renders the needle green to indicate that the puncture is correct; if the needle has not penetrated the blood vessel model, the scene creation unit 11 leaves the needle unrendered to indicate that it has not contacted or collided with the blood vessel model, so that the trainer adjusts the position of the handheld simulation module 2 according to the real-time image in the virtual scene; and if the travel of the needle exceeds the size of the blood vessel model without an effective puncture being completed, the scene creation unit 11 renders the needle red to indicate that it has not entered the blood vessel effectively, or that it entered the blood vessel and then passed out of it again even though it collided with the blood vessel model.
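As an illustration only, the rendering decision described above can be summarized as a small state mapping; the enum values, function name and the treatment of a glancing contact are assumptions, not taken from the source.

```python
from enum import Enum, auto

class PunctureState(Enum):
    NO_CONTACT = auto()   # implicit grid reports no collision yet
    IN_VESSEL = auto()    # crossed the vessel wall exactly once
    OVERSHOOT = auto()    # crossed the wall a second time (through-and-through)
    GLANCING = auto()     # collided with the wall but never crossed it

def needle_render_color(state):
    """Map the puncture state detected via the implicit grid to the color
    used by the scene creation unit when rendering the virtual needle."""
    if state is PunctureState.IN_VESSEL:
        return "green"                                    # correct puncture
    if state in (PunctureState.OVERSHOOT, PunctureState.GLANCING):
        return "red"                                      # erroneous operation
    return None                    # NO_CONTACT: leave the needle unrendered
```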
Preferably, the handheld simulation module 2 is used to simulate medical appliances held by the trainer, such as a syringe, a puncture needle or a scalpel, so that by changing the working position of the handheld simulation module 2 the trainer controls the motion of the virtual medical appliance in the virtual scene and thereby carries out puncture or incision training on the three-dimensional virtual model. Preferably, the data describing changes of the working position output by the handheld simulation module 2 are interacted and fused with the three-dimensional virtual model of the scene module 1 through the interaction module 3, so that image fusion between the virtual medical appliance and the three-dimensional virtual model is achieved in the virtual scene. Preferably, the virtual medical appliance can also compress or divide the three-dimensional virtual model, effectively simulating the deformation and separation of human tissue under the action of a medical appliance. Preferably, while fusing the data generated by the changing working position of the handheld simulation module 2 with the three-dimensional virtual model, the interaction module 3 analyzes the changes the virtual medical appliance causes in the model, so that when a virtual appliance such as a needle is inserted into a blood vessel of the three-dimensional virtual model, the collision between the virtual medical needle and the blood vessel wall is detected through the implicit grid arranged on the blood vessel wall, helping the trainer grasp whether the puncture operation is correct. Preferably, a pressure sensing unit 21 is also arranged on the surface of the handheld simulation module 2. The analysis module 5 can determine whether the trainer's operation is problematic from the feedback force measured by the pressure sensing unit 21 at different points in time and the puncture situation at those points in time, and can make further adjustments or prompts. Specifically, the distribution and magnitude of the grip pressure detected by the pressure sensing unit 21 characterize the trainer's grip posture and force application while using the handheld simulation module 2; by analyzing the differences between the data of correct and incorrect acupuncture operations, the analysis module 5 marks the points at which the trainer errs, and by prompting the trainer to correct the marked errors it enables the trainer to perform the acupuncture operation at an appropriate angle, force and travel.
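The comparison of grip-pressure data is not detailed in the source. A minimal sketch, assuming each training run is logged as a time series of grip pressure and that deviation from a stored reference profile of a correct puncture is used to flag error points; the resampling length and the deviation threshold are illustrative.

```python
import numpy as np

def resample(profile, n=100):
    """Resample a grip-pressure time series to n points so that runs of
    different duration can be compared sample by sample."""
    profile = np.asarray(profile, dtype=float)
    x_old = np.linspace(0.0, 1.0, len(profile))
    x_new = np.linspace(0.0, 1.0, n)
    return np.interp(x_new, x_old, profile)

def mark_error_points(trial_pressure, reference_pressure, threshold=0.2):
    """Return the normalized time positions at which the trainer's grip
    pressure deviates from the reference (correct) profile by more than
    the given relative threshold."""
    trial = resample(trial_pressure)
    ref = resample(reference_pressure)
    scale = np.maximum(np.abs(ref), 1e-6)
    deviation = np.abs(trial - ref) / scale
    marks = np.where(deviation > threshold)[0]
    return marks / float(len(trial) - 1)   # fractions of the puncture duration
```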
At present, when medical staff perform puncture, intravenous injection or intramuscular injection, a lack of working experience often leads to mistakes in puncture angle and puncture length and therefore to a poor puncture result; acupuncture training therefore needs to analyze the practitioner's posture and hand movements so as to point out incorrect operations. Specifically, during venous puncture there are obvious differences between patients in vascularity and vessel diameter, so medical staff may fail to place the needle effectively in the vein, causing local swelling, pain, blood return and other adverse phenomena in the punctured region; in particular, when the veins of a patient's hand are thin, the staff may be unable to locate the vessel accurately, the needle may not enter the vessel, or the needle may pierce the vessel and then pass through the opposite wall into the surrounding tissue. Therefore, to improve the proficiency of medical staff in these nursing skills, the device can simulate human tissue models with a variety of vascular distributions in the virtual scene, for the trainer to practice acupuncture with the virtual medical needle. Preferably, the scene module 1 detects, by means of the implicit grid arranged on the blood vessel wall, whether the handheld simulation module 2 has pierced a blood vessel in the three-dimensional virtual model, while the analysis module 5 analyzes and records the forces on the handheld simulation module 2 as the trainer completes the puncture, so that after the same trainer has punctured the same three-dimensional virtual model repeatedly, problems in the trainer's hand movements can be identified by comparing the force profiles of correct punctures with those of incorrect punctures. For example, when the trainer drives the virtual medical needle towards a blood vessel, an error in the puncture position or a deviation in the puncture angle may prevent the tip from entering the vessel effectively: the needle is deflected after colliding with the vessel wall, or passes out of the vessel again after being advanced too far. In such cases the trainer has grasped the distribution of the blood vessel model within the three-dimensional virtual model but still cannot place the virtual medical needle in the vessel because of errors in the puncture operation itself.
Preferably, the scene module 1 sets the implicit grid on the blood vessel wall to detect whether the virtual medical needle has been inserted effectively. In a vascular puncture operation, the practitioner's grip posture, puncture angle and puncture length all affect the result: if the needle enters the vessel at too steep an angle it tends to pass straight through, so that the tip cannot be retained in the vessel; if the angle is too shallow the needle tends not to enter the vessel sufficiently and merely touches it without the tip penetrating. Specifically, when the virtual medical needle corresponding to the handheld simulation module 2 collides with a blood vessel wall in the three-dimensional virtual model and passes through the implicit grid for the first time, the needle has completed one effective puncture training operation. If the virtual medical needle collides with a blood vessel wall and the implicit grid detects deformation of the wall, but the needle does not pass through the implicit grid, no effective puncture has been completed; the likely cause is a deviation between puncture angle and puncture position, so that the needle strikes the vessel but cannot enter it effectively. Such an operation is recorded as a typical erroneous operation, and the scene module 1 can emphasize this kind of training in subsequent sessions to improve the trainer's skill. The analysis module 5 also reminds the trainer of the incorrect hand posture and manner of applying force associated with the invalid puncture, so that the trainer can correct them and place the needle in the vessel accurately. Preferably, when the virtual medical needle collides with the blood vessel wall repeatedly and passes through the implicit grid on both sides of the vessel, no effective puncture has been completed either, because the needle has pierced the near wall, passed the implicit grid on its inner surface and then pierced the far wall, so that the tip is not retained in the vessel and an injected fluid could not flow into the vessel effectively. In addition, the three-dimensional virtual model can simulate the deformation of the skin and human tissue under compression during puncture training; in that case the position of the blood vessel within the model changes, which can also cause the puncture to fail.
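A minimal sketch of the outcome classification implied above, assuming each training run yields the number of times the needle tip passed through the implicit grid and whether any wall contact was detected at all; the event names and the counting rule are illustrative, not taken from the source.

```python
from enum import Enum, auto

class Outcome(Enum):
    VALID_PUNCTURE = auto()       # crossed the wall exactly once, tip retained in vessel
    NO_ENTRY = auto()             # touched or deformed the wall but never crossed it
    THROUGH_AND_THROUGH = auto()  # crossed the wall twice (out the far side)
    NO_CONTACT = auto()           # never reached the vessel

def classify_puncture(crossings, touched_wall):
    """crossings: times the needle tip passed through the implicit grid in
    one run; touched_wall: whether any collision with the vessel wall was
    detected at all."""
    if crossings == 1:
        return Outcome.VALID_PUNCTURE
    if crossings >= 2:
        return Outcome.THROUGH_AND_THROUGH
    if touched_wall:
        return Outcome.NO_ENTRY
    return Outcome.NO_CONTACT
```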
Preferably, the scene module 1 and the interaction module 3 are both arranged on the wearing module 4 and are connected to it by wired or wireless signal links. The trainer enters the virtual scene by means of the wearing module 4 and thereby obtains the information of the three-dimensional virtual model. Preferably, the wearing module 4 presents the fused virtual scene output by the scene output unit 12 to the trainer, so that the trainer can intuitively adjust the working position of the handheld simulation module 2 according to the scene shown by the wearing module 4 and can perform acupuncture and tissue separation operations on the three-dimensional virtual model with the corresponding virtual medical appliance; the virtual scene simulated by the wearing module 4 also moves with the trainer, giving the trainer an immersive training experience and making the training scene closer to a real one.
Preferably, the wearing module 4 takes the form of a visor, and a visual unit 41 capable of capturing the trainer's hand movements in the real scene is arranged on the surface of the wearing module 4 at the position of the bridge of the nose. Preferably, while the handheld simulation module 2 records the puncture length and puncture angle of each training operation, the visual unit 41 records the change in position of the hand holding the handheld simulation module 2, which is used to correct the puncture angle and puncture length introduced into the virtual scene by the interaction module 3, so that the movement of the virtual medical needle in the virtual scene fits the actual movement of the handheld simulation module 2 more accurately. Preferably, the visual unit 41 of the wearing module 4 also filters the rotation of the trainer's head against a preset deflection angle threshold, and the threshold determines whether the three-dimensional virtual model in the virtual scene is reconstructed to follow the change of the trainer's head posture. Specifically, the visual unit 41 uses the preset deflection angle threshold to decide, from the trainer's head rotation, whether the scene module 1 needs to re-establish the virtual scene observed from the trainer's first-person viewpoint; in other words, a threshold determines whether the real-time coordinates and angles of the three-dimensional virtual model in the virtual scene change with the pose of the body. For example, the deflection threshold may ignore a deflection of the trainer's head posture of less than 5° occurring within ten seconds during training, since a small, slow deflection of the head is usually an adaptive or unconscious adjustment of posture. Setting the deflection angle threshold therefore reduces the amount of computation the scene module 1 spends on reconstructing the virtual scene, avoids adverse symptoms such as vertigo caused by a frequently shaking virtual scene, keeps the established scene stable, increases the response speed of the scene module 1 and reduces the image reconstruction frequency. Image reconstruction by the scene module 1 consumes considerable memory, yet the trainer's head sometimes shakes slightly only because the body moves to relieve shoulder and neck fatigue and maintain a good training state over a long period, not to change the viewing angle or look around. The threshold therefore avoids invalid reconstruction and, by filtering out drift that does not reflect a viewing-angle requirement, reduces the interference of the trainer's unconscious movements with image transformation and head pose tracking, thereby lowering the frequency with which the virtual scene and the three-dimensional virtual model are rebuilt.
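A minimal sketch of such a gating rule, assuming head yaw samples arrive with timestamps and using the 5° and ten-second figures from the example above as defaults; the sliding-window formulation and class name are assumptions.

```python
from collections import deque

class ReconstructionGate:
    """Decide whether the scene module should rebuild the first-person view.

    Small, slow head deflections (less than angle_deg within window_s
    seconds) are treated as unconscious posture adjustment and ignored."""

    def __init__(self, angle_deg=5.0, window_s=10.0):
        self.angle_deg = angle_deg
        self.window_s = window_s
        self.samples = deque()          # (timestamp, yaw_deg)

    def should_reconstruct(self, timestamp, yaw_deg):
        self.samples.append((timestamp, yaw_deg))
        # drop samples older than the window
        while self.samples and timestamp - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        yaws = [y for _, y in self.samples]
        swing = max(yaws) - min(yaws)   # total deflection inside the window
        return swing >= self.angle_deg
```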
Nausea and dizziness while watching the screen are among the most pressing problems in VR training, because current VR systems orient the field of view from head movement, and some trainers feel slightly dizzy when they shift between several visual focuses rapidly within a short time. By tracking head position and posture, the virtual training device can simulate a highly realistic virtual operating environment: computer technology simulates a real clinical surgical environment, and biomechanical technology provides tactile sensory feedback, achieving highly immersive virtual training. As a new platform for surgical teaching and training, the virtual training device provides physicians with operating environments that are realistic to the eye, the hand and the other senses, allows specific steps of a procedure to be practiced repeatedly, and greatly reduces the cost of training surgeons. By building different tissues, organs, anatomical structures and operating environments, the virtual training device supports a variety of training modes; it meets the operational needs of surgeons, can be reused, produces no medical waste, and is more efficient and environmentally friendly than traditional training. Because the virtual training platform reproduces a real operating environment in virtual reality, training operations can be repeated very conveniently, which can substantially shorten the surgical training period of medical staff and improve training efficiency.
Preferably, the scene module 1 outputs the training operation data generated in the virtual scene it has established to the analysis module 5 for analysis and classified storage. Preferably, the analysis module 5 analyzes and compiles statistics on the trainer's training performance from the received data, so as to mark, according to the analysis result, the three-dimensional virtual models on which the trainer needs repeated training. Preferably, the scene module 1 builds a three-dimensional virtual model suited to the trainer according to the marking result of the analysis module 5, so that the trainer can train intensively on the weaknesses in his or her own operation.
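The statistics themselves are not specified in the source. A minimal sketch, assuming each stored record carries the model identifier and the classified outcome of one run, and that models whose success rate falls below a threshold are marked for repeat training; the threshold values and field names are illustrative.

```python
from collections import defaultdict

def mark_models_for_retraining(records, min_success_rate=0.8, min_attempts=3):
    """records: iterable of (model_id, outcome) tuples, where outcome is the
    classification of one puncture run (e.g. 'VALID_PUNCTURE', 'NO_ENTRY',
    'THROUGH_AND_THROUGH'). Returns the model_ids the trainer should repeat."""
    attempts = defaultdict(int)
    successes = defaultdict(int)
    for model_id, outcome in records:
        attempts[model_id] += 1
        if outcome == "VALID_PUNCTURE":
            successes[model_id] += 1
    marked = []
    for model_id, n in attempts.items():
        if n >= min_attempts and successes[model_id] / n < min_success_rate:
            marked.append(model_id)
    return marked
```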
Preferably, the interaction module 3 performs a rapid intersection test between the virtual medical appliance and the three-dimensional virtual model representing the soft tissue, in the same coordinate system, according to the feedback data of the handheld simulation module 2 and the position parameters of the three-dimensional virtual model in the virtual scene. Preferably, the handheld simulation module 2 helps the trainer adjust grip posture, angle and amount of movement through tactile feedback.
Preferably, to address the long time needed in the prior art to detect collisions between the needle tube and a volumetric blood vessel model, an implicit grid attached only to the inner wall of the blood vessel is added on that inner wall, which reduces the time and space complexity of the collision detection algorithm. Preferably, the implicit grid is not drawn during visual rendering; it serves mainly to detect collisions between the needle tube and the inner wall of the blood vessel quickly and to issue a warning according to the result, so as to prevent the operator from pushing the needle through the vessel wall. Compared with puncture training systems and methods in the prior art, the present application is aimed chiefly at training medical staff in intravenous infusion or intramuscular puncture and at helping practicing staff improve their medical skills rapidly. Whereas the prior art merely analyzes and judges whether a puncture has occurred, the present application classifies venous puncture results in a way that better matches the clinical situation, that is, it simulates whether the needle has been placed in the vessel effectively. Moreover, whereas the prior art judges the puncture by puncture depth alone, the present application distinguishes and analyzes the puncture results corresponding to different needle positions.
The invention applies a volume conservation constraint in a position-based dynamics deformation model, together with a space-partitioning acceleration method, to the actual blood vessel deformation, and models the needle tube with a tetrahedral mesh, so that collisions between the models can be detected effectively and the experimental results verified. The hybrid tissue geometry that combines the surface model of the blood vessel wall with the volumetric model of the needle improves the efficiency of the collision detection algorithm between the needle and the skin and blood vessel wall.
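Position-based dynamics enforces such constraints by projecting particle positions. The following is a minimal sketch of a volume-conservation projection for a single tetrahedron of a tetrahedralized tissue model, which is one standard way to realize the constraint named above; the stiffness handling and data layout are assumptions, not taken from the source.

```python
import numpy as np

def tet_volume(p1, p2, p3, p4):
    """Signed volume of a tetrahedron."""
    return np.dot(p2 - p1, np.cross(p3 - p1, p4 - p1)) / 6.0

def project_volume_constraint(p, w, rest_volume, stiffness=1.0):
    """One position-based-dynamics projection step enforcing volume
    conservation of a tetrahedron.

    p: (4, 3) array of particle positions; w: (4,) inverse masses.
    Returns the corrected positions."""
    p = p.copy()
    c = tet_volume(*p) - rest_volume
    # constraint gradients with respect to each particle
    g = np.zeros((4, 3))
    g[1] = np.cross(p[2] - p[0], p[3] - p[0]) / 6.0
    g[2] = np.cross(p[3] - p[0], p[1] - p[0]) / 6.0
    g[3] = np.cross(p[1] - p[0], p[2] - p[0]) / 6.0
    g[0] = -(g[1] + g[2] + g[3])
    denom = sum(w[i] * np.dot(g[i], g[i]) for i in range(4))
    if denom < 1e-12:
        return p
    s = stiffness * c / denom
    for i in range(4):
        p[i] -= s * w[i] * g[i]
    return p
```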
Preferably, the virtual training device of the invention is further provided with a training artificial limb for detecting puncture parameters that vary while the trainer operates. The training artificial limb can be overlaid on the three-dimensional virtual model in the virtual scene, providing the trainer with physical resistance to the motion of the needle during puncture training and thereby helping the trainer perceive a more realistic puncture. Preferably, the training artificial limb may be an arm venipuncture training model as disclosed in patent document CN111754850B; further preferably, the training artificial limb of the present application requires only the main structure of that prior-art arm venipuncture training model and does not require the expensive components associated with its internal vessel segments. The training artificial limb mainly provides realistic needling force feedback to help the trainer master the strength of the needling. Further preferably, the training artificial limb may additionally be provided with elastic elements such as springs to simulate the deformability of skin and human tissue, so that the force and angle with which the trainer applies the needle via the handheld simulation module 2 are monitored by the sensing unit arranged on the training artificial limb, the needling length is conveniently deduced from the deformation of the artificial limb, and the virtual training device analyzes the force, angle and length of the needling and supplements its data, thereby grasping the trainer's hand activity during training more accurately.
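As an illustrative sketch only, the deformation of such an elastic element can be converted into an estimate of needling force and insertion depth; the linear skin model, the spring constant and the function names are assumptions.

```python
def estimate_needle_force(deflection_mm, spring_constant_n_per_mm=0.5):
    """Hooke's-law estimate of the resisting force fed back by the training
    artificial limb, from the measured deflection of its elastic element."""
    return spring_constant_n_per_mm * deflection_mm

def estimate_insertion_depth(probe_travel_mm, surface_deflection_mm):
    """Rough insertion depth: the travel of the handheld simulation module
    minus the amount by which the simulated skin surface itself gave way."""
    return max(0.0, probe_travel_mm - surface_deflection_mm)
```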
Example 2
This embodiment is a further improvement of embodiment 1, and repeated contents are not described again.
This embodiment provides a method based on virtual training and examination scoring, which comprises at least the following steps:
generating three-dimensional virtual models of human tissue with different parameters according to the instruction and, while establishing a blood vessel model inside the human tissue, setting an implicit grid attached to the blood vessel wall;
the trainer operates on the three-dimensional virtual model in the established virtual scene, and the handheld simulation module 2 outputs data describing changes of its working position, so that the interaction and fusion between the trainer's training operation and the three-dimensional virtual model are realized through the interaction module 3, and the implicit grid can detect the collision between the handheld simulation module 2 and the blood vessel wall.
Preferably, the scene module 1 and the interaction module 3 are both arranged on the wearing module 4, and the trainer enters the virtual scene through the wearing module 4 and performs puncture training on the three-dimensional virtual model in the virtual scene using the handheld simulation module 2. When the handheld simulation module 2 collides with a blood vessel model located inside the human tissue represented by the three-dimensional virtual model, the handheld simulation module 2 enters the blood vessel model and is captured by the implicit grid attached to the blood vessel wall, so that the trainer completes a single puncture training.
It should be noted that the above-described embodiments are exemplary, and that those skilled in the art, having the benefit of the present disclosure, may devise various arrangements which, although not explicitly described herein, embody the principles of the invention and fall within its scope. It should also be understood by those skilled in the art that the specification and figures are illustrative only and do not limit the claims; the scope of the invention is defined by the claims and their equivalents. Throughout this document, a feature described as "preferable" is optional only, and the applicant reserves the right to disclaim or delete any such preferred feature at any time.
Claims (10)
1. An apparatus based on virtual training and examination scoring, comprising:
a scene module (1) that generates three-dimensional virtual models of human tissue with different parameters according to an instruction, wherein the scene module (1) can also set an implicit grid on the surface of an inner base layer while establishing an inner base layer model located inside the human tissue;
the handheld simulation module (2), with which a trainer performs training operations on the three-dimensional virtual model in the established virtual scene, wherein the handheld simulation module (2) is capable of outputting data describing changes of its working position, so that interaction and fusion between the training operation of the trainer and the three-dimensional virtual model of the scene module (1) are realized through the interaction module (3), and the implicit grid can detect the collision between the handheld simulation module (2) and the inner base layer tissue.
2. The virtual training and examination scoring-based device according to claim 1, wherein the scene module (1) and the interaction module (3) are both arranged on a wearing module (4), a trainer enters the virtual scene through the wearing module (4), and the trainer conducts puncture training on a three-dimensional virtual model using the handheld simulation module (2) in the virtual scene;
when the handheld simulation module (2) collides with an inner base layer model located inside the human tissue represented by the three-dimensional virtual model, the handheld simulation module (2) enters the inner base layer model and is captured by the implicit grid attached to the surface of the inner base layer, so that the trainer completes a single puncture training.
3. The virtual training and examination scoring-based device according to claim 2, wherein the handheld simulation module (2) is further capable of acquiring a puncture length and a puncture angle of the trainer during each training operation, and the acquired puncture length and puncture angle are corrected by means of the change in position of the trainer's hand gripping the handheld simulation module (2), acquired by the visual unit (41) of the wearing module (4), so that the change of position of the handheld simulation module (2) in the virtual scene can coincide with its change of position in the real scene.
4. The virtual training and examination scoring based device according to claim 3, wherein the scene module (1) is further capable of converting the handheld simulation module (2) into a virtual medical appliance in a virtual environment, so that a trainer can change the posture, angle and motion state of the handheld virtual medical appliance according to a physical feedback signal sent by the handheld simulation module (2);
when the virtual medical appliance represented by the handheld simulation module (2) penetrates into the inner base layer tissue, the scene module (1) indicates the correctness of the training operation of the trainer by color rendering, and when the penetrating motion of the virtual medical appliance represented by the handheld simulation module (2) is not detected by the implicit grid, or the virtual medical appliance represented by the handheld simulation module (2) collides with the inner base layer model a second time and is detected by the implicit grid, the scene module (1) indicates the error of the training operation of the trainer by rendering in a different color.
5. The device based on virtual training and examination scoring according to claim 4, wherein the scene module (1) is further able to output the training operation data generated in the virtual scene it has established to an analysis module (5) for analysis and classified storage, wherein
the analysis module (5) can analyze the trainer's training situation and compile statistics from the received training operation data, so that the three-dimensional virtual models on which the trainer needs further training are marked according to the analysis result.
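Claim 5's analysis module can be pictured as aggregating the exported training records per three-dimensional virtual model and marking the models on which the trainer still under-performs. A minimal sketch, assuming each record is a (model identifier, success flag) pair; the pass-rate threshold is an illustrative assumption.

```python
from collections import defaultdict

def mark_models_needing_practice(records, pass_rate_threshold=0.8):
    """records: iterable of (model_id, success) pairs, one per training operation.
    Returns {model_id: pass_rate} for every model whose pass rate is below the threshold."""
    attempts, passes = defaultdict(int), defaultdict(int)
    for model_id, success in records:
        attempts[model_id] += 1
        passes[model_id] += int(success)
    return {m: passes[m] / attempts[m]
            for m in attempts if passes[m] / attempts[m] < pass_rate_threshold}

records = [("thin_vessel", False), ("thin_vessel", True), ("thin_vessel", False),
           ("normal_vessel", True), ("normal_vessel", True)]
print(mark_models_needing_practice(records))  # {'thin_vessel': 0.333...}
```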
6. The device based on virtual training and examination scoring according to claim 5, wherein the scene module (1) builds a three-dimensional virtual model suited to the trainer according to the marking result produced by the analysis module (5), so that the trainer can carry out intensive training targeting the deficiencies in his or her own operation.
7. The device based on virtual training and examination scoring according to claim 6, wherein the visual unit (41) of the wearing module (4) is further able to screen rotations of the trainer's head against a preset deflection-angle threshold, the preset deflection-angle threshold being used to determine whether the three-dimensional virtual model in the virtual scene is rebuilt to follow the change in the trainer's head posture.
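Claim 7's deflection-angle threshold is essentially a dead-band that screens out small, involuntary head rotations so the scene is only rebuilt when the head posture has genuinely changed. A minimal sketch; the 5-degree default and the yaw-only treatment are illustrative assumptions.

```python
def should_rebuild_scene(previous_yaw_deg, current_yaw_deg, threshold_deg=5.0):
    """Rebuild the three-dimensional virtual model only when the head has
    rotated past the preset deflection-angle threshold."""
    # Wrap the difference into [-180, 180) so crossing the 0/360 boundary is handled.
    delta = (current_yaw_deg - previous_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(delta) >= threshold_deg

print(should_rebuild_scene(10.0, 12.5))  # False: small jitter, keep the current view
print(should_rebuild_scene(10.0, 18.0))  # True: follow the change in head posture
print(should_rebuild_scene(359.0, 6.0))  # True: 7-degree turn across the wrap-around
```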
8. The device based on virtual training and examination scoring according to claim 7, wherein the handheld simulation module (2) monitors the force applied by the trainer during the training operation by means of a pressure sensing unit (21) arranged on the surface of its gripping area, and the force applied during the puncture operation is evaluated according to the measured force and the puncture result.
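Claim 8 relates the force recorded by the pressure sensing unit (21) to the outcome of the puncture. A minimal sketch of one way to summarise the recording; the acceptable force band (in newtons) is an illustrative assumption and would need calibration on the real device.

```python
def assess_applied_force(force_samples_n, puncture_succeeded, low_n=0.5, high_n=3.0):
    """Summarise the grip force recorded during one puncture and pair it with the result.

    force_samples_n    -- force readings (N) from the pressure sensing unit (21)
    puncture_succeeded -- whether the implicit grid accepted the puncture
    """
    peak = max(force_samples_n)
    if peak < low_n:
        verdict = "force too light"
    elif peak > high_n:
        verdict = "force too heavy"
    else:
        verdict = "force within the expected range"
    return {"peak_force_n": peak, "verdict": verdict, "puncture_succeeded": puncture_succeeded}

print(assess_applied_force([0.4, 1.2, 2.1, 1.8], puncture_succeeded=True))
```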
9. A method based on virtual training and examination scoring, characterized by comprising at least the following steps:
generating three-dimensional virtual models of human tissue with different parameters according to an instruction, and setting an implicit grid on the surface of an inner base-layer model while establishing the inner base-layer model located inside the human tissue;
a trainer training on the three-dimensional virtual model in the established virtual scene, with change data of the trainer's working position being output, so that interaction and fusion between the trainer's training operation and the three-dimensional virtual model are achieved through an interaction module (3), and the implicit grid can detect a collision between a handheld simulation module (2) and the inner base-layer tissue.
10. The method based on virtual training and examination scoring according to claim 9, wherein a scene module (1) and the interaction module (3) are both arranged on a wearing module (4), the trainer enters the virtual scene through the wearing module (4) and, in the virtual scene, performs puncture training on the three-dimensional virtual model using the handheld simulation module (2);
when the handheld simulation module (2) collides with the inner base-layer model located inside the human tissue represented by the three-dimensional virtual model, the handheld simulation module (2) enters the inner base-layer model and is captured by the implicit grid attached to the surface of the inner base layer, whereby the trainer completes a single puncture training.
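Read together, method claims 9 and 10 describe a per-step loop: fuse the change data from the handheld simulation module into the virtual scene and let the implicit grid decide when a single puncture is complete. A minimal, self-contained sketch of that loop; the class, its fields and the flat stand-in for the implicit grid are all illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TrainingSession:
    """Toy end-to-end loop for the claimed method (all names illustrative)."""
    collision_test: Callable[[List[float]], bool]              # stands in for the implicit grid
    tip: List[float] = field(default_factory=lambda: [0.0, 0.01, 0.0])
    punctures: int = 0

    def step(self, pose_change: List[float]) -> bool:
        # Fuse the handheld module's change data into the virtual scene (claim 9).
        self.tip = [a + b for a, b in zip(self.tip, pose_change)]
        # The implicit grid detects the collision with the inner base layer (claim 10).
        if self.collision_test(self.tip):
            self.punctures += 1
            return True        # single puncture training complete
        return False

# Illustration only: a flat "inner base layer" at y = 0 with a 2 mm capture band.
session = TrainingSession(collision_test=lambda p: abs(p[1]) <= 0.002)
for change in ([0.0, -0.004, 0.0], [0.0, -0.004, 0.0], [0.0, -0.001, 0.0]):
    if session.step(change):
        print("single puncture complete at tip", session.tip)
        break
```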
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211029249.6A CN115421594A (en) | 2022-08-25 | 2022-08-25 | Device and method based on virtual training and examination scoring |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115421594A (en) | 2022-12-02
Family
ID=84199565
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211029249.6A Pending CN115421594A (en) | 2022-08-25 | 2022-08-25 | Device and method based on virtual training and examination scoring |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115421594A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117789562A (en) * | 2024-02-23 | 2024-03-29 | 湖南安泰康成生物科技有限公司 | Electrode slice application education method, device and system, terminal equipment and storage medium |
Similar Documents
Publication | Title
---|---
US11403964B2 | System for cosmetic and therapeutic training
US20220309954A1 | Injection training apparatus using 3d position sensor
Coles et al. | Integrating haptics with augmented reality in a femoral palpation and needle insertion training simulation
CN110459085A | A kind of human body comprehensive punctures Computer Simulation training and checking device
US7731500B2 | Vascular-access simulation system with three-dimensional modeling
US20140011173A1 | Training, skill assessment and monitoring users in ultrasound guided procedures
US11373553B2 | Dynamic haptic robotic trainer
CN112071149A | Wearable medical simulation puncture skill training system and method
CN115421594A | Device and method based on virtual training and examination scoring
CN115328317A | Quality control feedback system and method based on virtual reality
Abounader et al. | An initial study of ingrown toenail removal simulation in virtual reality with bimanual haptic feedback for podiatric surgical training
CN114038259A | 5G virtual reality medical ultrasonic training system and method thereof
CN114220306B | Traditional Chinese medicine needling manipulation training system and method based on deficiency-excess combination technology
WO2024154647A1 | Learning method and learning system
KR102414213B1 | Artificial joint implant surgery simulation system
CN118334952A | Mongolian medicine three-edged needle-punched knee-eye acupoint virtual-real combined training system
GB2519637A | System for injection training
CN114237400A | PICC reality augmentation system, reality augmentation method and mobile terminal
CN118486218A | Spinal endoscopic surgery simulation training system and method
CN113990136A | Medical surgery simulation system based on VR
CN116312177A | Peripheral nerve block virtual simulation training system and method under ultrasonic guidance
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination