CN116229793A - Training and checking system based on virtual reality technology - Google Patents


Info

Publication number
CN116229793A
Authority
CN
China
Prior art keywords: student, server, students, hand, guided
Legal status: Pending (assumed; not a legal conclusion)
Application number
CN202310270209.9A
Other languages
Chinese (zh)
Inventor
吴瑛
吴芳琴
邓颖
施丽莎
刘铭萱
陈婧楠
王艳玲
王慧莹
丁舒
王秋实
方彤
Current Assignee: Capital Medical University
Original Assignee: Capital Medical University
Application filed by Capital Medical University
Priority application: CN202310270209.9A
Publication: CN116229793A

Classifications

    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 — Simulators for teaching or training purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention relates to a training and assessment system based on virtual reality technology, comprising a guided learning mode, a feedback training mode, and an assessment mode. In the guided learning mode, the server displays the actions of the correct operation flow in the student's field of view, and the student practices the flow by overlapping his or her hands with the guiding hands, following the prompts of a task navigation list. In the feedback training mode, the server hides the task navigation list and the guiding hand model, issues feedback reminders based on the deviation between the guiding hand model and the student's actions, and demonstrates the guiding hand model when needed. In the assessment mode, the server generates, at the teacher's direction, scenes and case scenarios consistent with the student's examination syllabus. The invention simulates real clinical settings and evaluates students on how they judge, decide on, and implement treatment or nursing measures as a patient's condition changes, rather than through written examination answers.

Description

Training and checking system based on virtual reality technology
Technical Field
The invention relates to the technical field of medical training, and in particular to a training and assessment system based on virtual reality technology.
Background
As a frontier of the new generation of information technology, virtual reality is penetrating deeply into many industries, and market demand continues to expand. In existing medical and nursing teaching, students lack hands-on experience with patient treatment and the nursing of common diseases before clinical practice; theoretical knowledge is abstract, clinical practice opportunities are limited, and training resources and quality control are constrained. Training quality feedback management based on virtual reality technology, compared with teaching modes such as textbooks, standardized patients, problem-based learning, and scenario simulation, can help students or interns improve their clinical thinking and clinical decision-making. Simulated clinical operation reduces training difficulty while, more importantly, training thinking, so that students' or interns' practical operation and nursing skills improve rapidly and training efficiency increases without being limited by time, place, or human and material resources.
Chinese patent CN110867122A discloses an interactive emergency training system based on virtual reality technology, comprising: a character model generation module; an external chest compression module; an artificial respiration module; a virtual scene generation module; a motion capture module for capturing the motions of the chest compression and artificial respiration modules and generating corresponding virtual motions in the scene; a user management module; and a feedback evaluation module. Different scenes are generated using virtual reality, and multi-environment, multi-mode emergency training is carried out according to the differing emergency procedures of each scene, improving medical staff's pre-hospital emergency skills. The system can also fill a gap: virtual reality can recreate a disaster scene with full fidelity, even presenting different types of casualties, so that trainees personally feel the mental and physical impressions of a disaster, heightening the sense of tension and crisis.
Chinese patent CN107393390A discloses a virtual reality first-aid training manikin and training system. The manikin comprises a body model, a tracking system, a transmission processing module, and a gyroscope, pulse position sensor, chest compression position sensor, displacement sensor, and air pressure sensor, all connected to the transmission processing module, with the gyroscope mounted on the manikin's head. The training system comprises the manikin, a receiving module, a host computer, a display screen, data gloves, and a head-mounted display. By constructing a virtual reality scene, the patent gives the operator an immersive experience and achieves the explanation, training, and examination of CPR knowledge, with the advantages of high safety, low cost, and a high success rate.
Chinese patent CN114049808A discloses an emergency knowledge training system based on virtual reality, comprising a display, a host, a VR headset, and a handle controller, with the host electrically connected to each of the others. The host contains a virtual reality emergency knowledge training component that works with the display, headset, and controller to carry out training; the component comprises a debris-burial module, a mobile escape module, and a successful-escape prompting module. Because emergency knowledge is rendered virtually through these modules, and the physical equipment places the user in a simulated disaster scene, the system makes learning emergency knowledge more engaging while achieving efficient, intelligent training.
However, the above patents have drawbacks: they do not construct clinical decision-making scenarios and pay little attention to virtual operation and interaction, so their practical value as learning aids is low. In particular, for training guidance, the design of interactive scenes, characters, and object blocks, and of information acquisition, does not account for users who are learners with only theoretical knowledge and no practical experience; even after such training, these learners accumulate no practical experience. The user only knows the result obtained, not how it was obtained, and cannot clearly relate the data to a real scene. The present invention therefore proposes training that uses virtual reality devices to administer clinical scenarios and patient information. Apart from the difference in training subjects (the present invention focuses on students still in school), the virtual reality training in the above patents addresses only a single, relatively narrow operation, rather than global training in, and observation of, the development and prognosis of a patient's condition, making it harder for students to exercise clinical thinking and clinical decision-making ability.
Furthermore, differences arise on one hand from varying understanding among those skilled in the art; on the other hand, because the applicant studied a great deal of literature and patents while making the present invention, the text does not exhaustively recite all details and contents. This by no means implies that the invention lacks these prior-art features; rather, the invention may possess all of them, and the applicant reserves the right to add related prior art to the background section.
Disclosure of Invention
Because current training systems are usually built on books, graphics, and computer displays, students can learn only in two dimensions. For complex case treatment and nursing, schools train students with methods such as scenario simulation, standardized patients, and problem-based learning (PBL), which suffer from poor realism, poor repeatability, and restrictions of time and place. In particular, these methods generally require small-group teaching and are therefore constrained by limited teaching resources and inconsistent teaching quality. The operations students must perform, and the associated knowledge, are tied to two-dimensional data: students must constantly consult several kinds of patient information presented in two dimensions to diagnose conditions and choose the corresponding operations. During training, they must repeatedly switch among patient-related forms on a computer display to view the patient's actual condition and make decisions.
In view of the defects of the prior art, the invention provides a training and assessment system based on virtual reality technology, and in particular a training quality feedback system. The system comprises at least a guided learning mode, a feedback training mode, and an assessment mode. In the guided learning mode, a server builds a guiding hand model matched to the scene in a virtual scene, displaying hand actions of the correct operation flow in the student's field of view; the student practices the flow by overlapping his or her own hands with the guiding hands. In the feedback training mode, the server hides the guiding hand model, issues feedback reminders based on the deviation between the guiding hand model and the student's current hand, and demonstrates the corresponding guiding hand actions. In the assessment mode, the server extracts the corresponding scenes and models from a case database, or the teacher presets them, for the examination. Addressing the problems that students or interns in existing medical institutions lack experience with patient treatment, that theoretical knowledge is too abstract, that clinical experimental operation is hard to arrange, and that effective training conditions and quality control are lacking, the invention performs training quality feedback management based on virtual reality. Compared with traditional teaching through textbook theory and laboratory scenario simulation, it can construct clinical scenes more realistically, so that students or interns can learn without limits of time and space.
Traditional clinical practice and laboratory simulation teaching cannot satisfy the requirement that caregivers nurse patients individually according to the nursing process, and can hardly achieve the goal of training caregivers' clinical thinking. Simulated clinical operation reduces training difficulty so that students' or interns' practical skills improve rapidly. Following changes in the patient's condition, the invention exercises students or interns in observing the condition, making clinical judgments and decisions, implementing nursing care, and providing health guidance, training their clinical decision-making ability and clinical thinking.
According to a preferred embodiment, the server produces a scoring report based on the student's operation; the report is derived at least from analysis of the movement trajectories of the student's hands and the relative positions of the contact points and the model. The scoring criteria are stored in the database, so no expert review is required; after the corresponding steps and operations are completed, the server marks them as completed in the operation record and feeds the score back to the student.
According to a preferred embodiment, the server analyzes the movement trajectories of the student's hands, the relative positions of the contact points and the model, and the action deviation values against the guiding hand model to produce the scoring report.
According to a preferred embodiment, the server compares, frame by frame, the difference between the spatial position of the student's hands and that of the guiding hand model to output an action deviation value; the server computes this deviation frame by frame based at least on the cosine similarity principle.
According to a preferred embodiment, the server is configured to: acquire the current spatial position of the student's hands and take it as the starting point; acquire the spatial positions of the student's hands and the guiding hand model; and, for each frame and relative to the starting point, compute the cosine of the angle formed between the student's hands and the guiding hand model.
According to a preferred embodiment, the server is further configured to: obtain, by weighted summation and averaging, aggregate values for the student's hands and the guiding hand model over the whole examination; compute the absolute difference between the two values; if the absolute difference is at most the action deviation threshold, judge the trajectories and contact points of the student's hands and the guiding hand model to be consistent; if it exceeds the matching-degree threshold, judge them not to match; and, by linear normalization into the matching-degree threshold interval, obtain a measurement of how far the student's actions deviate from the guiding hand model's, which serves as the final score.
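The frame-by-frame cosine-similarity comparison described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function names and the treatment of zero-length displacements are assumptions.

```python
import math

def cosine(u, v):
    """Cosine of the angle between two 3-D displacement vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0 or nv == 0:
        return 1.0  # no displacement: treat as perfectly aligned (assumption)
    return dot / (nu * nv)

def frame_deviations(student_frames, guide_frames):
    """Per-frame deviation of the student's hand from the guiding hand.

    Each frame is an (x, y, z) position; displacements are measured
    from the starting point (frame 0), as the embodiment describes.
    """
    s0, g0 = student_frames[0], guide_frames[0]
    devs = []
    for s, g in zip(student_frames[1:], guide_frames[1:]):
        su = tuple(a - b for a, b in zip(s, s0))   # student displacement
        gu = tuple(a - b for a, b in zip(g, g0))   # guide displacement
        devs.append(1.0 - cosine(su, gu))          # 0 = identical direction
    return devs
```

Identical motions yield deviations of 0; orthogonal motions yield 1, giving the server a per-frame quantity to aggregate by weighted summation.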
According to a preferred embodiment, the server also builds guidance information in the virtual scene; the guidance information provides task prompts in the form of a task list placed at least within the user's field-of-view interface. In the guided learning mode, students receive task prompts from this list, can practice operations, and can obtain explanations of key points, without time limits and with full repeatability. This arrangement simplifies how students acquire data, shortens their judgment time, reduces the possibility of data errors, increases engagement during training, and simultaneously teaches students how the object blocks in the virtual scene correspond to the relevant real-world equipment used to acquire patient data.
According to a preferred embodiment, the system further comprises a two-dimensional patient form displayed as two-dimensional data. When the server establishes at least one virtual scene, it holds a patient form, extracted from the case database, recording the attribute information of every object block in the scene. Within the virtual scene, applying an operation action to a single object block converts it from a three-dimensional model into the two-dimensional patient form, which is displayed on at least one display surface in the scene that meets visual requirements. Triggering the form through a preset operation action avoids the large display area otherwise needed to inspect data: students can learn the patient's relevant information directly from the object block's two-dimensional form, which is easy to consult and makes the source of the information clear.
According to a preferred embodiment, the operation actions include at least displaying the two-dimensional patient form when the object block is triggered by a preset action; when the object block is triggered, the form corresponding to it is displayed visually in the virtual scene. In existing training systems, the data relationships among patient forms are invisible, and many data elements are shown on a form at once, making it hard for students to observe data changes quickly. The invention therefore proposes presenting patient information through a virtual reality device for training.
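The trigger mechanism above can be sketched as a mapping from a preset action on an object block to its two-dimensional form. The class, the `grab_and_flip` action name, and the example ECG readings are hypothetical illustrations, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectBlock:
    """A 3-D object in the virtual scene carrying patient attribute info."""
    name: str
    form: dict                 # attribute information from the case database
    form_visible: bool = False

    def trigger(self, action: str) -> Optional[dict]:
        """On the preset action, switch to the 2-D patient form view."""
        if action == "grab_and_flip":      # hypothetical preset action
            self.form_visible = True
            return self.form               # rendered on a display surface
        return None

# Example: an ECG monitor block reveals its readings only when triggered.
monitor = ObjectBlock("ECG monitor", {"heart_rate": 118, "SpO2": "92%"})
```

Any other action leaves the block as a 3-D model, so a form appears only when the student deliberately queries that object.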
According to a preferred embodiment, the two-dimensional patient form is placed in the virtual scene so that its color, gamma value, and/or contrast distinguish it from the other blocks and the background; when displaying the form, the server selects an area and visual features suitable for presenting it, given the object blocks and background colors in the scene. A virtual scene offers many object blocks for students to choose from; the choices are complex and varied, and the color mixing of the blocks is intricate. If a block resembles the color of the other blocks or the background, students can hardly read the required data, and the two-dimensional patient form alone cannot solve this visibility problem. The invention therefore configures the form's display surface so that the server automatically selects how the form is shown.
Drawings
FIG. 1 is a simplified schematic diagram of a scenario of a training assessment system based on virtual reality technology according to a preferred embodiment of the present invention;
FIG. 2 is a simplified schematic diagram of one of the scenes viewed by a teacher through an image according to a preferred embodiment of the present invention.
List of reference numerals
1: a server; 2: virtual reality equipment; 3: two-dimensional patient forms; 4: virtual scenes.
Detailed Description
To make the above objects, features, and advantages of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtainable by those skilled in the art from them without inventive effort fall within the scope of the invention.
The following detailed description refers to the accompanying drawings.
Server: any type of computer or processing system, including but not limited to a mobile terminal, personal computer (PC), personal digital assistant (PDA), mainframe computer, network device, or any other device or combination of devices capable of storing and processing a database containing patient information. The term is defined broadly to encompass any device or combination of devices having at least one processor that executes instructions from a storage medium.
Example 1
The invention relates to a training and assessment system based on virtual reality technology. The system comprises at least a server 1, which can establish a connection with a virtual reality (VR) display device. The server 1 may employ a general-purpose central processing unit (CPU), an application-specific integrated circuit (ASIC), a microprocessor, or one or more integrated circuits to execute the relevant instructions or programs implementing the invention. The system further includes a storage unit, which may be an integral part of the server 1 or an element independent of it, and which may be implemented as read-only memory (ROM), random-access memory (RAM), static or dynamic storage devices, and the like. The storage unit may hold the input/output operating system, data storage management system, and running operating system of the server 1. When the technical scheme of the invention is implemented in software or hardware, the relevant program code may be held in the storage unit and executed by the server 1.
The invention uses virtual reality technology to build an interactive training system comprising scenes, characters, equipment, items, and other modules. The scenes include an emergency hall, rescue room, ward, nurses' station, and so on; the characters include nurses, doctors, patients, and family members; the medical devices include an electrocardiograph, ECG monitor, infusion pump, and the like; the items include hand sanitizer, blood-sampling supplies, and the like. The invention builds the virtual scene 4 with virtual reality technology to produce a simulated visual effect, focuses on virtual operation and interaction, and provides a training quality feedback system based on virtual reality. Addressing the problems that students or interns in existing medical institutions lack hands-on experience with patient treatment and common-disease nursing, that theoretical knowledge is too abstract, that clinical experimental operation is hard to arrange, and that effective training conditions and quality control are lacking, the system performs training quality feedback management based on virtual reality. Compared with traditional teaching through textbook theory and laboratory scenario simulation, it can construct clinical scenes realistically, so that students or interns can learn without limits of time and space. Traditional clinical practice and laboratory simulation teaching cannot satisfy the requirement that caregivers nurse patients individually according to the nursing process, and can hardly achieve the goal of training caregivers' clinical thinking. Simulated clinical operation reduces training difficulty so that students' or interns' practical skills improve rapidly.
Following changes in the patient's condition, the invention exercises students or interns in observing the condition, making clinical judgments and decisions, implementing nursing care, providing daily-life care, giving health guidance, and so on, training their clinical decision-making ability and clinical thinking.
According to a preferred embodiment, the system further comprises a virtual reality device 2, which includes external feedback devices such as a hand-worn device and a head-worn device. When the user performs an action requiring touch, such as pressing the patient's abdomen or measuring the patient by hand, the server 1 provides corresponding feedback according to the established virtual scene 4, enabling human-machine interaction. Preferably, the system further comprises a projection module for projecting the virtual scene 4 and the virtual hand model used by the student onto an extended screen for viewing. Preferably, the user's first-person and third-person perspectives can be projected together, so that one person operates while many learn simultaneously, ensuring consistent and efficient teaching.
According to a preferred embodiment, the virtual-reality-based training system is divided into three modes: a guided learning mode, a feedback training mode, and an assessment mode. In the guided learning mode, students learn the system's nursing clinical thinking and nursing decision flow from fairly detailed task prompts; it suits students using the system for the first time or junior nursing students. This mode provides a detailed task navigation list that the learner follows as a prompt to complete the learning; its main feature is that students can complete exactly the content they want to learn according to the highlighted key points. In the feedback training mode, the learner uses clinical thinking to analyze and decide, from only a few task prompts, the order of task completion and the nursing operations to include, and then carries them out; after each small task, the system shows whether the result is correct and gives immediate evaluation feedback, and if the learner makes an irreversible error, it reports the error and prompts the correct answer. In the assessment mode, students receive no task prompts: within a set time they must use clinical thinking to judge the condition and the information required, gather information through various examinations, analyze it independently to reach a conclusion, and make decisions. The assessment report can be consulted during learning and training, and the final practice score and report are available after the case flow is completed.
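The three modes differ mainly in how much prompting and feedback the server exposes. A hedged configuration sketch follows; the flag names and the exact true/false assignments are illustrative readings of the description above, not terms from the patent.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    GUIDED = "guided_learning"
    FEEDBACK = "feedback_training"
    ASSESSMENT = "assessment"

@dataclass(frozen=True)
class ModeConfig:
    task_navigation_list: bool   # detailed task list shown in view?
    guided_hand_model: bool      # guiding hands rendered for overlap practice?
    immediate_feedback: bool     # per-task correctness shown right away?
    timed: bool                  # fixed exam duration enforced?

MODE_CONFIGS = {
    Mode.GUIDED:     ModeConfig(True,  True,  False, False),
    Mode.FEEDBACK:   ModeConfig(False, False, True,  False),
    Mode.ASSESSMENT: ModeConfig(False, False, False, True),
}
```

Selecting a mode then reduces to looking up its config, so the rest of the system need not special-case the three modes.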
Preferably, in the guided learning mode described above, the server 1 builds a guiding hand model matched to the virtual scene 4, displaying in the student's field of view the hand motions of the correct operation flow; students practice the flow by overlapping their own hands with the guiding hands. Preferably, in the feedback training mode, the server 1 hides the guiding hand model but issues feedback reminders based on the deviation between the guiding hand model and the student's current hand, and demonstrates the corresponding guiding hand actions, thereby prompting the student toward the correct action. Preferably, in the assessment mode, the server 1 extracts the corresponding scenes and models from a large case database, or the teacher presets them, for the examination. Preferably, the server 1 produces a scoring report based on the student's operation, or the teacher scores visually from a third-person perspective. Preferably, the scoring report is derived at least from analysis of the movement trajectories of the student's hands and the relative positions of the contact points and the model. Preferably, when simulating a patient, for each change of condition or triggered event the server 1 provides three categories of answers, including correct and incorrect, covering the various operations that may occur; the server 1 judges whether the learner's operation is correct against its stored criteria and issues the corresponding scoring report.
The scoring criteria of the invention are stored in the database, so no expert review is required; after the corresponding steps and operations are completed, the server 1 marks them as completed in the operation record and feeds the score back to the student. Preferably, in the feedback training mode, the student may make repeated attempts as needed until the operation is learned correctly; when an operation is correct, the server 1 advances the case accordingly. Preferably, in the assessment mode, the server 1 sets a duration for the case, the student operates freely, and the server 1 provides a feedback score on completion.
According to a preferred embodiment, in the guided learning mode, the server constructs a guiding hand model matched to the scene in a virtual scene, displaying hand actions of the correct operation flow in the student's field of view, and the student practices the flow by overlapping his or her hands with the guiding hands according to the task navigation list prompts. In the feedback training mode, the server hides the task navigation list and the guiding hand model, issues feedback reminders based on the deviation between the guiding hand model and the student's current hand, and demonstrates the corresponding guiding hand actions. In the assessment mode, the server extracts the corresponding scenes and models from the case database, or the teacher presets them, for the examination.
Preferably, the server gives a scoring report based on the student's course of operation; the scoring report is obtained at least by analyzing the movement track of the student's hands, the relative positions of the contact points and the model, and a comparison of the student's choices against the correct options of the preset scene.
According to a preferred embodiment, the server 1 performs a frame-by-frame comparison based on the difference between the spatial position of the student's hand and that of the guided hand model, and outputs an action deviation value. The spatial positions include, with reference to the human hand, the fingertips, the palm and the wrist. Preferably, the server 1 calculates the action deviation value frame by frame for the student's hand and the guided hand model based at least on the cosine similarity principle. Specifically, the server 1: acquires the current spatial position of the student's hands and takes it as a starting point; acquires the spatial positions of the student's hand and of the guided hand model respectively, and calculates the cosine value of the included angle they form; based on the starting point, calculates for each frame the cosine values of the included angles formed by the student's hand and by the guided hand model respectively; obtains, through weighted summation and averaging, the data of the student's hand and of the guided hand model over the whole examination; calculates the absolute value of the difference between the two data values; if the obtained absolute value is smaller than or equal to the action deviation threshold, judges that the trajectories and contact points of the student's hand and the guided hand model are consistent; if the obtained absolute value is larger than the threshold, judges that they do not match; and obtains, through linear normalization to the threshold interval, a measure of the deviation of the student's hand action from the guided hand model action, which is taken as the final score.
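The frame-by-frame cosine-similarity comparison above can be sketched roughly as follows. This is only one plausible reading of the steps, not the claimed implementation: the choice of reference direction, the equal default weights, the 0-to-100 score range and all names are illustrative assumptions.

```python
from math import sqrt

def _cos(u, v):
    """Cosine of the angle between two 3-D vectors (1.0 for a zero vector)."""
    nu = sqrt(sum(c * c for c in u))
    nv = sqrt(sum(c * c for c in v))
    if nu == 0 or nv == 0:
        return 1.0
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def action_deviation(student, guide, threshold=0.2, weights=None):
    """One plausible reading of the frame-by-frame comparison: `student` and
    `guide` are per-frame 3-D positions of one keypoint (fingertip, palm or
    wrist) over the assessment.  Returns (matched, final_score)."""
    origin = student[0]                                # current position = start point
    ref = [g - o for g, o in zip(guide[-1], origin)]   # assumed reference direction

    def hand_value(track):
        # per-frame cosine of the angle between the displacement from the
        # origin and the reference direction, then a weighted average
        cos = [_cos([p - o for p, o in zip(frame, origin)], ref)
               for frame in track]
        w = weights or [1.0] * len(cos)
        return sum(wi * ci for wi, ci in zip(w, cos)) / sum(w)

    diff = abs(hand_value(student) - hand_value(guide))   # |student - guide|
    matched = diff <= threshold                 # within the deviation threshold
    score = max(0.0, 1.0 - diff / threshold) * 100.0      # linear normalization
    return matched, score
```

An identical student and guide track yields a deviation of zero and the maximum score, while a diverging track falls below the threshold and scores lower.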
According to a preferred embodiment, the server 1 gives different final scores based on differences in the virtual scene 4 and the case model. Because judging the student's hand actions alone cannot accurately reflect the student's true level, the server 1 corrects the final score based on the treatment difficulty, the nursing difficulty and the clinical decisions of the simulated patient. Preferably, different treatments of different conditions have different addition coefficients in the final-score calculation. For example, midwifery procedures involve many different situations concerning the placenta, the umbilical cord and so on; among these, the umbilical cord wrapped around the neck has the highest addition coefficient in the final-score calculation. The formula is at least: correction score = final score × umbilical-cord-around-the-neck addition coefficient.
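The correction formula above can be illustrated as follows. The condition names and coefficient values are hypothetical examples; only the structure (correction score = final score × addition coefficient) comes from the text.

```python
# Illustrative score correction: each simulated condition carries an addition
# coefficient, with the umbilical cord around the neck assumed highest here.
# The numeric values are hypothetical, not taken from the patent.

ADDITION_COEFFICIENTS = {
    "normal_delivery": 1.0,
    "placenta_previa": 1.1,
    "umbilical_cord_around_neck": 1.3,   # assumed highest coefficient
}

def corrected_score(final_score, condition):
    """Correction score = final score x condition addition coefficient."""
    return final_score * ADDITION_COEFFICIENTS.get(condition, 1.0)
```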
Since current training systems are often built on books, graphics and computer displays, students can only learn from two dimensions. For complex case assistance and nursing, the manipulations and related knowledge required are tied to this two-dimensional presentation of data: students must constantly consult several kinds of patient information from a two-dimensional angle in order to diagnose diseases and choose the corresponding operations. While training, this forces students to view the actual condition of the patient by constantly switching between patient-related forms on a computer display in order to make corresponding decisions. In actual practice, however, students acquire the patient's current physical-state information through inquiry, close observation, paper documents and real-time data from monitoring equipment beside the sickbed. Limited by the display surface of computer equipment, existing training systems train students on paper: the information students obtain is disordered, they know only the results obtained but not how the information was obtained, and they cannot clarify the relation between the data and the real scene. Even if more data is displayed on the computer device, the lack of observation of a real scene cannot be overcome. Therefore, the invention provides training in which patient information is given by a virtual reality device, allowing students to independently simulate all nursing work for patients in a highly realistic clinical scene, verify and supplement their existing knowledge, carry out the complete nursing procedure from assessment to evaluation, and bridge the gap between conventional school teaching and clinical practice.
According to a preferred embodiment, the database of the system stores two-dimensional patient forms 3. The two-dimensional patient form 3 is displayed as a two-dimensional data table and can be presented as a planar graphic in the virtual scene 4. Preferably, when the server 1 establishes at least one virtual scene 4, the server 1 extracts from the case database a patient form set which records the attribute information of all object blocks in the virtual scene 4. The server 1 maps each two-dimensional patient form to the corresponding three-dimensional model based on the object-block attribute information recorded in that form, so that by applying an operation action to a single object block in the virtual scene 4, the object block is converted from a three-dimensional model into the two-dimensional patient form 3. The server 1 synchronously updates the corresponding two-dimensional patient form by mapping whenever the three-dimensional model is updated; here an update refers to a change of the content displayed by the three-dimensional model, which causes the data content of the corresponding two-dimensional patient form to be updated, and not to image deformation caused merely by a change of viewing angle. The two-dimensional patient form 3 is displayed on at least one display surface within the virtual scene 4 that meets visual requirements. Preferably, the two-dimensional patient form 3 is associated with the three-dimensional model; that is, when the three-dimensional model performs a data switch, the two-dimensional patient form 3 performs the corresponding data switch as well.
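The binding between an object block's three-dimensional model and its two-dimensional patient form, including the synchronized content update, might be sketched as follows. All class and field names are illustrative assumptions; the sketch only shows the mapping structure described above.

```python
# Minimal sketch: each object block keeps a reference to its two-dimensional
# patient form, an operation action switches the block to form display, and a
# content update on the 3-D model is mapped onto the form synchronously.

class PatientForm2D:
    """A 2-D data table of patient attributes (hypothetical layout)."""
    def __init__(self, rows):
        self.rows = dict(rows)          # attribute name -> displayed value

class ObjectBlock:
    """A 3-D model in the virtual scene bound to a 2-D patient form."""
    def __init__(self, name, attributes):
        self.name = name
        self.form = PatientForm2D(attributes)
        self.showing_form = False

    def apply_operation(self):
        # an operation action on the single block switches 3-D model -> 2-D form
        self.showing_form = True
        return self.form

    def update_display(self, attribute, value):
        # a content change on the 3-D model is mapped to the form; a mere
        # viewing-angle change would not call this method
        self.form.rows[attribute] = value
```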
For example, when the three-dimensional model is an electrocardiograph, the two-dimensional patient form 3 switches to the corresponding data based on the switching of the electrocardiograph display. Each image of the electrocardiograph (e.g. the last recorded instantaneous electrocardiographic change) is provided with an associated two-dimensional patient form 3 (e.g. the last recorded instantaneous electrocardiographic data). The object blocks comprise at least the patient's own virtual character, a series of virtual detection devices arranged at the patient's bedside, and a virtual case table in the virtual scene 4; the object blocks include all virtual models built in the virtual scene 4. In existing training systems, the data relationship of each patient form is invisible and many data elements are displayed on a form simultaneously, making it difficult for students to observe data changes quickly.
Specifically, the student wears the virtual reality device 2 and views the object blocks provided in the virtual scene 4. The object blocks comprise the patient's virtual character, a series of virtual detection devices arranged at the patient's bedside, and a virtual case table in the virtual scene 4. The student views the two-dimensional patient form 3 associated with an object block through clicking or other interactive operations to obtain the desired patient information. According to the invention, the virtual scene 4 is built and the relevant training data are integrated, by virtual reality technology, into object blocks corresponding to real equipment or objects, so that students understand how the data are obtained and need not acquire information by constantly switching between data forms in order to train. When students relate the data learned in the virtual scene 4 to a real clinical environment, they can inquire, observe and consult the patient's devices, avoiding the limitations of computer training. The patient information comprises at least patient attribute information, authority attribute information and case attribute information, and can also comprise other required attribute information. Preferably, the server 1 builds the object blocks based on one or more items of the patient information; the server 1 determines the two-dimensional patient form 3 and/or the visual features of the object block to be displayed based on hidden information in the patient information. Hidden information refers to conditions of the patient which the trainee, without knowing the patient's specific condition, must judge indirectly based on his or her own experience.
For example, the specific condition of the patient and its development are hidden information, and the patient's external signs or internal physiological-parameter changes need to be displayed. Patient attribute information is, for example, the patient's length of stay, past cases and the like; any information related to the patient can be included. Authority attribute information is background information, such as the authority levels an operator can modify, used to distinguish students from teachers; a teacher can have a higher authority level so as to modify and confirm the virtual scene 4. Case attribute information is, for example, the patient information displayed by the object block, including electrocardiogram information, respiratory rate and the like, and can also include the observable current physical state of the patient, such as facial or physiological features. Through this distinction of patient information, students can judge independently based on practical experience, which strengthens their on-site decision-making ability.
According to a preferred embodiment, the operation actions comprise at least that an object block displays the two-dimensional patient form 3 when triggered by a preset action. Preferably, the server 1 displays the two-dimensional patient form 3 of the corresponding object block if the user performs a preset action and the preset action meets the condition for triggering the object block. The preset action can be, for example, a tap, a gesture or any other action capable of interaction; when an object block is triggered, the two-dimensional patient form 3 corresponding to it is visually displayed in the virtual scene 4. The preset actions are not limited to the above examples. Triggering the two-dimensional patient form 3 through a preset action reduces the large display area otherwise needed when viewing data; students can directly learn the patient's relevant information through the object block's two-dimensional patient form 3, finding it conveniently and also knowing the source of the information.
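The preset-action trigger described above can be sketched as follows; the action names and the two-step trigger check (is it a preset action, and does this block accept it) are assumptions for illustration.

```python
# Hypothetical trigger logic: a block's 2-D patient form is shown only when
# the user's action is a preset action accepted by that block.

PRESET_ACTIONS = {"tap", "grab_gesture", "point_gesture"}   # assumed set

def handle_action(block_triggers, action, visible_forms, form):
    """Display `form` if `action` is a preset action that triggers the block;
    return whether the form was shown."""
    if action in PRESET_ACTIONS and action in block_triggers:
        visible_forms.append(form)      # visually display the form in the scene
        return True
    return False
```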
According to a preferred embodiment, the two-dimensional patient form 3 is displayed on at least one display surface within the virtual scene 4 that meets visual requirements. In the virtual scene 4 there are a variety of object blocks for students to choose from. The choices are complex and diverse, and the colors of the individual object blocks blend in complex ways; if an object block is similar in color to the remaining blocks or to its background in the virtual scene 4, students will have difficulty reading the required data, and the problem of hard-to-observe object blocks cannot be solved by the two-dimensional patient form 3 alone. Therefore, the invention configures the display surface of the two-dimensional patient form 3 so that the server 1 automatically selects its display mode. Preferably, the two-dimensional patient form 3 is arranged in the virtual scene 4 at least in such a way that its color, gamma value and/or contrast can be distinguished from the remaining blocks or the background. When the two-dimensional patient form 3 is displayed in the virtual scene 4, the server 1 selects, according to the object blocks or background colors in the virtual scene 4, a region and features suitable for presenting the two-dimensional patient form 3. For example, in a virtual scene 4 with a single background, the server 1 adjusts the color, gamma value and/or contrast of the two-dimensional patient form 3 according to the color class of the background so that the form is distinguished from the remaining blocks or the background. Preferably, the two-dimensional patient form 3 is arranged at least so that it is suspended beside the corresponding object block without coinciding with it.
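The automatic selection of a distinguishable display mode could, for instance, compare the relative luminance of the form color and its background. The BT.709 luminance weights below are standard, while the threshold and the black/white fallback colors are assumptions for illustration, not the patented adjustment.

```python
# Sketch: if the form's colour is too close to the background in relative
# luminance, flip it to whichever of black or white contrasts more.

def luminance(rgb):
    """Relative luminance of an 8-bit RGB colour (ITU-R BT.709 weights)."""
    r, g, b = (c / 255.0 for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def pick_form_color(background_rgb, form_rgb, min_diff=0.3):
    """Keep the form colour if it already stands out from the background;
    otherwise choose a contrasting fallback (threshold is an assumption)."""
    if abs(luminance(form_rgb) - luminance(background_rgb)) >= min_diff:
        return form_rgb
    return (0, 0, 0) if luminance(background_rgb) > 0.5 else (255, 255, 255)
```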
When several two-dimensional patient forms 3 are displayed in the virtual scene 4, the suspended arrangement increases the display area for patient data; students can acquire the patient data quickly and decide on the corresponding operations, assessment time is saved, and the information obtainable from each object block becomes clear.
According to a preferred embodiment, the two-dimensional patient form 3 can be arranged on the object block so as to be embedded in it in at least three dimensions. For example, after the student triggers the object block with an operation action, the object block can expand into a cube module of at least three dimensions to display the two-dimensional patient form 3 three-dimensionally. Each item of data in the two-dimensional patient form 3 fills a slot of the cube module; the cube modules constitute, for example, a cubic manner of presentation. The purpose of this arrangement is to arrange and manage the information recorded in the two-dimensional patient form 3 in an orderly way, so that students can grasp the patient's condition quickly. The display surface of the invention differs from single-level two-dimensional display, and the collection of cube modules also enables orderly data acquisition by students and data management by teachers. Preferably, when the virtual reality device 2 is operated, the line of sight, the size and/or the sharpness of the cube module can all be selected. Preferably, when operated through the virtual reality device, the cube module can be rotated omni-directionally to display each dimension and the data it contains, and the selected face of the cube module is displayed in the student's view. Through this arrangement, the invention simplifies the students' manner of data acquisition, shortens judgment time, reduces the possibility of data errors, increases interest during training, and simultaneously teaches students how the object blocks in the virtual scene 4 correspond to the relevant equipment in the real world from which patient data are obtained.
A cube module of several dimensions can display patient information of several dimensions; presenting multiple pieces of information in a multi-dimensional, multi-set manner increases the user's ability to obtain information and the efficiency of searching for it. For teachers, the multi-dimensional cube module allows quick modification and selection of the patient information in an object block: the cube module turns the large display data set of a two-dimensional patient form 3 into a three-dimensional display, and the teacher can manipulate the elements contained in the cube module through rotation, dragging, selection and other operations, unlike the prior art in which many windows are manipulated through a two-dimensional display interface. For students, displaying patient information in the virtual scene in an orderly, dimensional way makes related information easy to obtain: by judging the corresponding dimension and type, students can lock onto the information they need from multiple angles without searching through numerous cluttered windows of a two-dimensional display interface. Students thus master the patient information quickly, the time wasted acquiring common information during training is saved, and training efficiency increases.
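The cube module described above, with one form item filled into each slot and rotation selecting the displayed face, might be sketched as follows; the six face names and the slot assignment are illustrative assumptions.

```python
# Hypothetical cube module: each face ("slot") holds one item of the 2-D
# patient form's data, and rotating the cube selects the face shown in the
# student's view.

class CubeModule:
    FACES = ["front", "back", "left", "right", "top", "bottom"]

    def __init__(self, form_rows):
        # fill each item of the 2-D form into a slot of the cube, in order
        self.slots = dict(zip(self.FACES, form_rows.items()))
        self.selected = "front"

    def rotate(self, face):
        """Omni-directional rotation: select a face and return the
        (attribute, value) pair it displays."""
        if face in self.slots:
            self.selected = face
        return self.slots.get(self.selected)
```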
During the learning phase, students must often learn more knowledge than can be directly obtained during actual clinical operation, such as parameter information not displayed on the device, device principles, electrocardiographic image interpretation, and so on; yet this "hidden" information is not unimportant. On the contrary, during training and when the trainee is actually on duty, this "hidden" information must be held in the trainee's mind. In the prior art, when training presents a realistic scene to the trainee (whether as a physical device model or virtually), the information directly shown and the "hidden" information requiring additional memorization are separate, fragmented and discontinuous. Some prior art does not show the latter at all; some shows it to students as a flat menu list or a physical book, but students then find it difficult to establish a correspondence between the hidden information and key features of the actual operation scene, which leads, for example, to poor operation of some equipment, easily forgotten key steps, and no learned troubleshooting strategy. Moreover, because of the requirement of "simulation", a device model can hardly show "hidden" information to students, since the real product it maps does not itself show that information.
In this scheme, through intelligent extraction of the two-dimensional form, and triggered by specified actions of the student in the virtual space, a peer-to-peer display switch between the three-dimensional model and its two-dimensional mapping is formed, so that a large amount of "hidden" information can be displayed in situ on the display surfaces of the equipment. The student can intuitively establish the relation between the information and its corresponding position and learn the related knowledge content, which greatly improves the teaching effect.
According to a preferred embodiment, multiple students can enter a virtual scene collaboratively for training. Preferably, several students enter the same virtual scene synchronously, and the server 1 judges the students' operation actions in real time and feeds back correspondingly. Preferably, when the server 1 establishes at least one virtual scene 4, the server 1 assigns corresponding operation identities or operation tasks to the students for cooperative training; the server 1 converts object blocks from three-dimensional models into two-dimensional patient forms 3 based on the operation actions applied by the students to one or more object blocks within the virtual scene 4. The students can see each other's avatars and their respective operation actions, and each student's operation on the virtual patient is fed back through changes in the patient's physical values. Multi-person training is used for training students in rescue work at disaster sites, including coordination ability, tacit cooperation and degree of coordination. It further simulates a real clinical medical scene and, on the basis of single-person training, trains the communication and cooperation ability of students as members of a medical team performing patient-care training together, fostering their ability to collaborate with nurses, doctors and other medical personnel in clinical practice.
According to a preferred embodiment, the server 1 also builds guidance information in the virtual scene 4. The guidance information gives task prompts in the form of a task list and is arranged at least in the user's visual-field interface. The guidance information includes, for example, "assess the pain site" and "provide a clinical decision to the physician". In the guided learning mode, students obtain task prompts through the task list in the visual-field interface, can carry out operation learning and obtain key-point explanations, with no time limit and with repeatability. In addition to guidance on task progress, for interactable scenes, persons, object blocks and the like, when the user operates the visual interface through the virtual reality device 2, the server 1 can highlight them and/or mark them with guiding arrows as interactable content, providing the related prompt information.
The invention does not require students to input their analysis and thinking process; instead, it simulates a real clinical scene and focuses on checking whether the students' activities under different conditions (including clinical thinking and decision points, namely assessment, diagnosis, planning, implementation and evaluation) are correct, examining the students' clinical thinking and clinical decision-making ability rather than examining them through paper answers or a single operation check. The student activities are acquired and interacted with by the server 1 through the operation actions and movement tracks of the virtual reality device 2. The student's scoring report is obtained from whether the relevant operation action was performed and whether its content was correct.
Throughout this document, the word "preferably" merely denotes one alternative and is not to be construed as a necessary feature, so the applicant reserves the right to forgo or delete the relevant preferred feature at any time.
It should be noted that the above-described embodiments are exemplary, and that a person skilled in the art, in light of the present disclosure, may devise various solutions that fall within the scope of the present disclosure. It should be understood by those skilled in the art that the present description and drawings are illustrative and do not limit the claims. The scope of the invention is defined by the claims and their equivalents. The description of the invention encompasses multiple inventive concepts, and expressions such as "preferably", "according to a preferred embodiment" or "optionally" all mean that the corresponding paragraph discloses a separate concept; the applicant reserves the right to file a divisional application according to each inventive concept.

Claims (10)

1. The training and checking system based on the virtual reality technology is characterized by at least comprising a guided learning mode, a feedback training mode and a checking mode; wherein,
In the guided learning mode, a server (1) builds a guided hand model matched with a scene in a virtual scene (4) and is used for displaying hand actions conforming to a correct operation flow in the vision of a student, and the student exercises the operation flow by overlapping the hands of the student with the guided hand;
in the feedback training mode, the server (1) cancels the display of the guided hand model, makes corresponding feedback reminding based on the deviation between the guided hand model and the current hand of the student, and gives demonstration of the corresponding guided hand model;
in the assessment mode, the server (1) extracts corresponding scenes and models from a case database or presets relevant scenes and models by a teacher for assessment.
2. The system according to claim 1, characterized in that the server (1) gives a scoring report based on the student's course of operation; wherein,
the scoring report is obtained by analyzing at least the movement track of the hands of the student and the relative positions of the contact points and the model.
3. The system according to claim 1 or 2, characterized in that the server (1) analyzes the student's hand movement trajectory, the relative positions of the contact points and the model, and the action deviation values with respect to the guided hand model, to give the scoring report.
4. A system according to any one of claims 1-3, characterized in that the server (1) performs a frame-by-frame comparison to output action deviation values based on the difference between the student hand spatial position and the guided hand model spatial position; wherein,
the server (1) calculates the action deviation values frame by frame for the student hand and the guided hand model based at least on the cosine similarity principle.
5. The system according to any one of claims 1 to 4, characterized in that the server (1) is configured to:
acquiring the current space position of the hands of the students and taking the current space position as a starting point;
respectively acquiring the spatial positions of the hands of the students and the guide hand model, and calculating cosine values of the formed included angles;
based on the initial point position, each frame of the student hand and the guide hand model respectively calculates cosine values of included angles formed by the student hand and the guide hand model.
6. The system according to any one of claims 1 to 5, wherein the server (1) is further configured to:
respectively obtaining the data of the student hand and the guide hand model in the whole examination process through weighted summation and averaging;
calculating the absolute value of the difference between the data values of the student hand and the guided hand model;
If the obtained absolute value is smaller than or equal to the action deviation value threshold, judging that the trajectories and the contacts of the hands of the students and the guide hand model are consistent;
if the obtained absolute value is larger than the matching degree threshold, judging that the trajectories and the contacts of the hands of the students and the guide hand model are not matched;
and obtaining a measurement value of the deviation of the student hand action from the guided hand model action through linear normalization to a matching degree threshold interval, and taking the measurement value as a final score.
7. The system according to any one of claims 1 to 6, characterized in that the server (1) also builds up guiding information in the virtual scene (4), which guiding information carries out task prompts in the form of task lists, which guiding information is provided at least in the user's visual field interface.
8. The system according to any of claims 1-7, further comprising a two-dimensional patient form (3); the two-dimensional patient form (3) is displayed in a two-dimensional data form; wherein,
when the server (1) establishes at least one virtual scene (4), a patient form set which is extracted from a case database and records attribute information of all object blocks in the virtual scene (4) is provided on the server (1); in the virtual scene (4), an object block is converted from a three-dimensional model into the two-dimensional patient form (3) by applying an operation action to the single object block; the two-dimensional patient form (3) is displayed on at least one display surface meeting visual requirements within the virtual scene (4).
9. The system according to any one of claims 1 to 8, characterized in that the operating actions comprise at least the object showing the two-dimensional patient form (3) in case the object is triggered by a preset action; when the object block is triggered, the two-dimensional patient form (3) corresponding to the object block is visually displayed in a virtual scene (4).
10. The system according to any of the claims 1 to 9, characterized in that the two-dimensional patient form (3) is arranged at least in the virtual scene (4) in such a way that the color, gamma value and/or contrast is distinguishable from the rest of the block or background; when the two-dimensional patient form (3) is displayed in the virtual scene (4), the server (1) selects the area and the feature suitable for the display of the two-dimensional patient form (3) through each object block or background color in the virtual scene (4).
CN202310270209.9A 2023-03-15 2023-03-15 Training and checking system based on virtual reality technology Pending CN116229793A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310270209.9A CN116229793A (en) 2023-03-15 2023-03-15 Training and checking system based on virtual reality technology

Publications (1)

Publication Number Publication Date
CN116229793A true CN116229793A (en) 2023-06-06

Family

ID=86575065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310270209.9A Pending CN116229793A (en) 2023-03-15 2023-03-15 Training and checking system based on virtual reality technology

Country Status (1)

Country Link
CN (1) CN116229793A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117219267A (en) * 2023-11-09 2023-12-12 北京大学第三医院(北京大学第三临床医学院) Method, apparatus, device and medium for simulating and diagnosing malignant hyperthermia
CN117219267B (en) * 2023-11-09 2024-02-06 北京大学第三医院(北京大学第三临床医学院) Method, apparatus, device and medium for simulating and diagnosing malignant hyperthermia
CN117934231A (en) * 2024-03-21 2024-04-26 中山市人民医院 VR-based ECMO guide wire expansion operation method
CN117934231B (en) * 2024-03-21 2024-06-04 中山市人民医院 VR-based ECMO guide wire expansion operation method

Similar Documents

Publication Publication Date Title
Hegarty et al. The role of spatial cognition in medicine: Applications for selecting and training professionals
US10417936B2 (en) Hybrid physical-virtual reality simulation for clinical training capable of providing feedback to a physical anatomic model
US20150079565A1 (en) Automated intelligent mentoring system (aims)
Basdogan et al. VR-based simulators for training in minimally invasive surgery
CN116229793A (en) Training and checking system based on virtual reality technology
US20080050711A1 (en) Modulating Computer System Useful for Enhancing Learning
Girau et al. A mixed reality system for the simulation of emergency and first-aid scenarios
Butaslac et al. Systematic review of augmented reality training systems
Kozhevnikov et al. Egocentric versus allocentric spatial ability in dentistry and haptic virtual reality training
McBain et al. Scoping review: The use of augmented reality in clinical anatomical education and its assessment tools
KR20240078420A (en) System and method for management of developmental disabilities based on personal health record
US20200111376A1 (en) Augmented reality training devices and methods
Ng et al. Using immersive reality in training nursing students
Schott et al. Cardiogenesis4d: Interactive morphological transitions of embryonic heart development in a virtual learning environment
CN116312118A (en) Meta hospital system based on augmented reality digital twinning
CN116306523A (en) Medical care scene editing system
CN116822850A (en) Simulation teaching management system
EP3905225A1 (en) System and method for evaluating simulation-based medical training
Saracini et al. Stereoscopy does not improve metric distance estimations in virtual environments
Gupta Investigation of a holistic human-computer interaction (HCI) framework to support the design of extended reality (XR) based training simulators
Cairco et al. Towards simulation training for nursing surveillance
Capogna et al. Teaching the Epidural Block
RU2799123C1 (en) Method of learning using interaction with physical objects in virtual reality
Gießer et al. SkillsLab+ - A new way to Teach Practical Medical Skills in an Augmented Reality Application with Haptic Feedback
Rojas-Murillo Identification of key visual areas that guide an assembly process in real and virtual environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination