CN116540881A - Teaching method and system based on VR - Google Patents


Info

Publication number: CN116540881A
Application number: CN202310767090.6A
Authority: CN (China)
Prior art keywords: scene, information, displaying, hidden, role
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN116540881B (en)
Inventors: 徐英, 黄璐蕾, 杨启进, 高宝明, 唐导, 阮韬, 马海燕, 彭俊
Current assignee: Jiangxi Tongai Education Technology Co ltd (the listed assignee may be inaccurate)
Original assignee: Jiangxi Tongai Education Technology Co ltd

Events:
Application filed by Jiangxi Tongai Education Technology Co ltd
Priority to CN202310767090.6A
Publication of CN116540881A
Application granted
Publication of CN116540881B
Legal status: Active

Classifications

    • G09B 5/02: Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06Q 50/20: Education (ICT specially adapted for implementation of business processes of specific business sectors)
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G09B 7/02: Electrically-operated teaching apparatus or devices working with questions and answers, of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to a question presented by a student
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • Tourism & Hospitality (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention belongs to the technical field of VR education and provides a VR-based teaching method and system. The method comprises the following steps: receiving educational scene simulation data through VR glasses, wherein the data comprise a plurality of educational simulation scenes, each provided with various characters; receiving a scene selection command and a character selection command, and starting the scene presentation; receiving a movement instruction input by the user, with the character's position information changing accordingly; detecting the character's position information and, when the character is located in an operation position area, displaying the corresponding operation selection information, wherein each character in each educational simulation scene is provided with a plurality of operation position areas and each operation position area is provided with corresponding operation selection information; and receiving an operation information selection instruction and displaying the corresponding operation picture. The user can thus experience various educational scene events immersively and act on his or her own judgment, giving good interactivity and a genuine sense of participation.

Description

Teaching method and system based on VR
Technical Field
The invention relates to the technical field of VR education, in particular to a teaching method and system based on VR.
Background
With the progress of science and technology, virtual reality (VR) technology has gradually attracted users' attention. VR is a computer simulation system that can create and let users experience a virtual world: a computer generates a simulated environment, an interactive simulation of three-dimensional dynamic views and entity behaviour with multi-source information fusion, in which users can immerse themselves and experience sensations close to reality. At present, many practice-oriented courses are difficult to teach well through theory alone. In public safety education, for example, students can hardly practise handling sudden and changeable safety events. There is therefore a need for an internet-based VR education method and system that combines VR technology so that students can practise alongside theoretical education.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a VR-based teaching method and system to solve the problems described in the background art.
The invention is realized in such a way that a teaching method based on VR comprises the following steps:
receiving educational scene simulation data through VR glasses, wherein the educational scene simulation data comprises a plurality of educational simulation scenes, and each educational simulation scene is provided with various roles;
receiving a scene selection command and a character selection command, and starting scene display to select a first view angle of a character to display a selected education simulation scene;
receiving a movement instruction input by a user, enabling the position information of the role to change along with the movement instruction, and displaying a scene of a corresponding position according to the position information;
detecting position information of characters, and displaying corresponding operation selection information when the characters are located in operation position areas, wherein each character in each education simulation scene is provided with a plurality of operation position areas, and each operation position area is correspondingly provided with the operation selection information;
and receiving an operation information selection instruction and displaying a corresponding operation picture.
As a further scheme of the invention: the step of receiving the operation information selection instruction and displaying the corresponding operation picture specifically comprises the following steps:
receiving an operation information selection instruction, and displaying a corresponding operation picture according to a first view angle of a role;
determining the scenario trend of the education simulation scene according to the operation information selection instruction, and displaying the scene corresponding to the scenario;
when the scenario is completed, operation scoring information including a score of each operation information selection instruction and preference information of each operation information selection instruction is displayed.
As a further scheme of the invention: the method further comprises the steps of:
receiving a role operation display command, wherein the role operation display command comprises a role selection;
moving the selected roles according to the track and executing the operation instruction;
a scene under an operation instruction is presented with a first perspective of a selected character.
As a further scheme of the invention: the method further comprises the steps of:
timing is carried out from the beginning of scene display to obtain timing time;
detecting timing time, and displaying hidden task selection information when the timing time reaches a hidden time value of a selected role, wherein each role is correspondingly provided with one or more hidden time values, and each hidden time value is correspondingly provided with hidden task selection information;
and receiving a hidden task selection instruction and displaying a corresponding task execution picture.
As a further scheme of the invention: the step of receiving the hidden task selection instruction and displaying the corresponding task execution picture further comprises the following steps:
after the task execution picture is completed, displaying hidden task scoring information, wherein the hidden task scoring information comprises specific scoring and priority information;
and receiving a role hiding task display command, enabling the selected role to execute the hiding task according to the operation, and displaying a scene for executing the hiding task according to a first view angle of the selected role.
It is another object of the present invention to provide a VR based teaching system, comprising:
the simulation data receiving module is used for receiving education scene simulation data through VR glasses, wherein the education scene simulation data comprises a plurality of education simulation scenes, and each education simulation scene is provided with various roles;
the scene role selection module is used for receiving a scene selection command and a role selection command, starting scene display, and displaying a selected education simulation scene by selecting a first view angle of a role;
the scene position following module is used for receiving a movement instruction input by a user, the position information of the role changes along with the movement instruction, and the scene at the corresponding position is displayed according to the position information;
the character operation module is used for detecting the position information of the characters, displaying corresponding operation selection information when the characters are positioned in the operation position areas, wherein each character in each education simulation scene is provided with a plurality of operation position areas, and each operation position area is correspondingly provided with the operation selection information;
the operation picture display module is used for receiving the operation information selection instruction and displaying a corresponding operation picture.
As a further scheme of the invention: the operation picture display module comprises:
an operation picture display unit for receiving an operation information selection instruction and displaying a corresponding operation picture at a first view angle of the character;
the scenario scene display unit is used for determining scenario trend of the education simulation scene according to the operation information selection instruction and displaying the scene corresponding to the scenario;
and the operation scoring information unit is used for displaying operation scoring information when the scenario is completed, wherein the operation scoring information comprises the score of each operation information selection instruction and the preference information of each operation information selection instruction.
As a further scheme of the invention: the system further includes an operation display module, the operation display module including:
a command input unit for receiving a character operation presentation command including character selection;
a command execution unit for moving the selected character according to the track and executing the operation instruction;
and the operation scene unit is used for displaying the scene under the operation instruction at the first view angle of the selected role.
As a further scheme of the invention: the system also comprises a hidden task execution module, wherein the hidden task execution module comprises:
the scene display timing unit is used for timing from the beginning of scene display to obtain timing time;
the hidden task selection unit is used for detecting the timing time, displaying hidden task selection information when the timing time reaches the hidden time value of the selected role, wherein one or more hidden time values are correspondingly arranged on each role, and each hidden time value corresponds to the hidden task selection information;
and the hidden task execution unit is used for receiving the hidden task selection instruction and displaying a corresponding task execution picture.
As a further scheme of the invention: the hidden task execution module further includes:
the hidden task scoring unit is used for displaying hidden task scoring information after the task execution picture is completed, wherein the hidden task scoring information comprises specific scoring and priority information;
and the hidden task unit is used for receiving a role hidden task display command, enabling the selected role to execute the hidden task according to the operation, and displaying a scene for executing the hidden task according to the first view angle of the selected role.
Compared with the prior art, the invention has the beneficial effects that:
the invention is provided with a plurality of education simulation scenes, each education simulation scene is provided with various roles, when in use, the position information of the roles is detected, when the roles are positioned in the operation position areas, the corresponding operation selection information is displayed, each role in each education simulation scene is provided with a plurality of operation position areas, and each operation position area is correspondingly provided with the operation selection information; the user inputs an operation information selection instruction and displays a corresponding operation picture. Therefore, the user can immersively experience various education scene events, and make corresponding actions according to own judgment, so that the interaction is good, and real participation is achieved.
Drawings
Fig. 1 is a flow chart of a VR based teaching method.
Fig. 2 is a flow chart of a VR based tutorial method for receiving an operation information selection instruction.
Fig. 3 is a flow chart of a VR based tutorial method for receiving a character operation show command.
Fig. 4 is a flow chart of a VR based tutorial method for receiving a hidden task selection instruction.
Fig. 5 is a flow chart of receiving a role hiding task presentation command in a VR based tutorial method.
Fig. 6 is a schematic structural diagram of a VR based teaching system.
Fig. 7 is a schematic structural diagram of an operation screen display module in a VR-based teaching system.
Fig. 8 is a schematic structural diagram of an operation display module in a VR-based teaching system.
Fig. 9 is a schematic structural diagram of a hidden task execution module in a VR based teaching system.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clear, the present invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Specific implementations of the invention are described in detail below in connection with specific embodiments.
As shown in fig. 1, an embodiment of the present invention provides a teaching method based on VR, which includes the following steps:
s100, educational scene simulation data are received through VR glasses, the educational scene simulation data comprise a plurality of educational simulation scenes, and various roles are arranged in each educational simulation scene;
s200, receiving a scene selection command and a role selection command, starting scene display, and displaying a selected education simulation scene by selecting a first view angle of a role;
s300, receiving a movement instruction input by a user, enabling position information of a role to follow the change, and displaying a scene of a corresponding position according to the position information;
s400, detecting position information of characters, and displaying corresponding operation selection information when the characters are located in operation position areas, wherein each character in each education simulation scene is provided with a plurality of operation position areas, and each operation position area is correspondingly provided with the operation selection information;
s500, receiving an operation information selection instruction and displaying a corresponding operation picture.
It should be noted that many practice-oriented courses currently struggle to achieve good results through theoretical education alone. In public security education, for example, students can hardly practise handling sudden and changeable security events. The embodiments of the present invention aim to solve this problem.
In the embodiment of the invention, to experience an educational scene presentation, the user wears the VR glasses and holds the operation handle. The VR glasses receive the educational scene simulation data, which comprise a plurality of educational simulation scenes, each provided with different characters. For safety education, for example, the educational simulation scenes may include various sudden safety-incident scenes, and the characters in the scenes may be students, ordinary residents, security guards, police officers, medical staff, firefighters, and so on. When the user has decided which educational simulation scene and character to experience, he or she inputs a scene selection command and a character selection command through the operation handle, and the scene presentation formally starts, displaying the selected educational simulation scene from the first-person perspective of the selected character. The user can thus experience various educational scene events immersively, leaving a deep impression. The user then inputs movement instructions through the operation handle, so that the character's position information changes accordingly, and the scene at the corresponding position is displayed according to the position information.
When the character is located in an operation position area, the corresponding operation selection information is displayed. Each character in each educational simulation scene is provided with a plurality of operation position areas, and each operation position area is provided with corresponding operation selection information; that is, different characters in different places are required to perform different tasks. The user issues an operation information selection instruction according to the operation selection information, and the corresponding operation picture is displayed. In this way the user not only experiences various educational scene events immersively but also acts on his or her own judgment, so the interactivity is good, the goal of practical education is readily achieved, and a genuine sense of participation results.
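A minimal Python sketch of the position-area trigger described above: each character carries several rectangular operation position areas, and entering one surfaces its operation selection information. All names, coordinates, and options here are illustrative assumptions, not data from the patent.

```python
from dataclasses import dataclass


@dataclass
class OperationArea:
    """One operation position area tied to a character in a scene."""
    name: str
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    options: list  # operation selection information shown on entry

    def contains(self, x: float, y: float) -> bool:
        """True when the character's position falls inside this area."""
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max


def operation_options_at(areas, x, y):
    """Return the selection options of the first area containing (x, y), or None."""
    for area in areas:
        if area.contains(x, y):
            return area.options
    return None


# Hypothetical example: a security-guard character with two areas in a fire drill.
areas = [
    OperationArea("alarm panel", 0, 2, 0, 2, ["press alarm", "ignore"]),
    OperationArea("exit door", 8, 10, 0, 2, ["open door", "guide evacuation"]),
]
```

In a real renderer this check would run every frame against the character's tracked position; a flat rectangle test keeps the sketch self-contained.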
In the present invention, the operation position areas may be classified into a plurality of levels according to their degree of importance. The grading is calculated from each operation position area's own position attribute and from the difficulty coefficients of all the different tasks that must be performed in that area.
Specifically, a position-area composite score value is calculated for each operation position area and then looked up in a preset score grade table to determine the level of the current operation position area. In this embodiment, the calculation formula of the position-area composite score value corresponding to each operation position area is expressed as:
$$S = S_0 \times \frac{1}{m}\sum_{i=1}^{m}\frac{d_i}{d_0}$$

wherein $S$ represents the position-area composite score value corresponding to each operation position area, $S_0$ represents the position-area standard score value corresponding to each operation position area, $i$ represents the serial number of a task, $m$ represents the maximum number of tasks, $1 \le i \le m$, $d_i$ represents the difficulty coefficient value of the $i$-th task, and $d_0$ represents the benchmark difficulty coefficient value of the tasks.
It will be understood that, after the position-area composite score value corresponding to each operation position area has been calculated, the level of the current operation position area can be determined by looking the value up in a preset score grade table. In the present invention, the operation position areas are divided into a first level, a second level, and a third level in order of importance from low to high. As mentioned above, different characters in different places are required to perform different tasks. In this embodiment, a higher degree of importance indicates that the tasks to be performed in the corresponding operation position area are more numerous and more important. Grading by importance effectively reminds the trainee to study the different operation position areas with deliberate discrimination, achieving a better teaching effect.
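The score-then-grade step can be sketched in a few lines of Python. The combining form (standard score scaled by the mean benchmark-normalised task difficulty) is an assumed reading of the patent's symbol list, and the grade thresholds are invented placeholders for the "preset score grade table".

```python
def composite_score(standard_score, difficulties, benchmark):
    """Position-area composite score: the area's standard score scaled by
    the mean of the task difficulty coefficients normalised against the
    benchmark coefficient. The exact functional form is an assumption."""
    if not difficulties:
        return standard_score
    mean_ratio = sum(d / benchmark for d in difficulties) / len(difficulties)
    return standard_score * mean_ratio


def area_level(score, thresholds=(50.0, 80.0)):
    """Map a composite score to level 1/2/3 (importance low to high)
    via a preset score-grade table, here just two cut-off values."""
    low, high = thresholds
    if score < low:
        return 1
    if score < high:
        return 2
    return 3
```

For an area with standard score 60 and task difficulties equal on average to the benchmark, the composite score stays 60 and the area lands in the middle level.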
As shown in fig. 2, as a preferred embodiment of the present invention, the step of receiving the operation information selection instruction and displaying the corresponding operation screen specifically includes:
s501, receiving an operation information selection instruction, and displaying a corresponding operation picture at a first view angle of a role;
s502, determining the scenario trend of the education simulation scene according to the operation information selection instruction, and displaying the scene corresponding to the scenario;
s503, when the scenario is completed, operation scoring information is displayed, the operation scoring information including a score of each operation information selection instruction, and preference information of each operation information selection instruction.
In the embodiment of the invention, the scenario trend of the educational simulation scene is determined according to the operation information selection instructions, and the scene corresponding to the scenario is displayed; the scene corresponding to each scenario is established in advance. When the scenario is completed, the operation scoring information is displayed, comprising the score of each operation information selection instruction and the pros-and-cons information of each operation information selection instruction. It should be noted that different operation information selection instructions input by the user produce different effects, and the score is given according to the effect; the pros-and-cons information refers to the advantages and/or disadvantages of executing that operation information selection instruction. The user thereby learns what to do when later facing the same scene, which has good educational value.
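One way to picture this branching-plus-scoring step is a small graph of pre-established scenario nodes, each choice carrying a score and pros-and-cons feedback. The node names, scores, and feedback strings below are illustrative assumptions.

```python
# Each operation choice maps to (next scenario node, score, pros/cons note).
PLOT = {
    "fire_start": {
        "pull alarm": ("evacuation", 10, "Fast alert; may cause brief panic."),
        "fight fire alone": ("trapped", 2, "Brave, but delays the alert."),
    },
    "evacuation": {
        "guide to exit": ("safe_end", 10, "Orderly evacuation."),
    },
}


def play(choices, start="fire_start"):
    """Follow the user's operation choices through the scenario graph and
    collect the per-choice scoring information shown when the scenario ends."""
    node, report = start, []
    for choice in choices:
        next_node, score, feedback = PLOT[node][choice]
        report.append((choice, score, feedback))
        node = next_node
    return node, report


end, report = play(["pull alarm", "guide to exit"])
```

The returned report is exactly the "operation scoring information": one score and one pros-and-cons note per instruction issued.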
As shown in fig. 3, as a preferred embodiment of the present invention, the method further includes:
s601, receiving a role operation display command, wherein the role operation display command comprises role selection;
s602, enabling the selected roles to move according to the track and executing operation instructions;
s603, displaying the scene under the operation instruction with the first view of the selected character.
In the embodiment of the invention, if the user wants to know how a certain character should behave when facing a security event, the user can directly input a character operation presentation command; the selected character then moves along its track and executes the operation instructions, and the scene under each operation instruction is displayed from the first-person perspective of the selected character. The user thus learns what each character should do, knows how to act in reality, and can actively cooperate with other characters, which gives a better educational effect.
As shown in fig. 4, as a preferred embodiment of the present invention, the method further includes:
s701, timing is performed from the beginning of scene display to obtain timing time;
s702, detecting timing time, and displaying hidden task selection information when the timing time reaches a hidden time value of a selected role, wherein each role is correspondingly provided with one or more hidden time values, and each hidden time value is correspondingly provided with hidden task selection information;
s703, receiving a hidden task selection instruction, and displaying a corresponding task execution picture.
In the embodiment of the invention, the timing time is recorded and monitored, and when it reaches a hidden time value of the selected character, the hidden task selection information is displayed. For example, eight minutes after a fire breaks out, firefighters arrive, and the character selected by the user (for example, a security guard) can now cooperate with the firefighters to carry out firefighting work quickly; at this moment the hidden task selection information pops up and displays several hidden tasks. The user can then input a hidden task selection instruction, and the corresponding task execution picture is displayed. This teaches the user to cooperate with others when facing an emergency instead of descending into confusion, and makes the whole educational process more realistic.
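The timer-triggered reveal can be sketched as a poll over the selected character's hidden time values, firing each one exactly once. The eight-minute and fifteen-minute values and task names are assumptions for illustration.

```python
def hidden_tasks_due(elapsed, hidden_times, shown):
    """Return the hidden time values the elapsed timing time has reached
    and that have not been shown yet; mark them as shown."""
    due = [t for t in hidden_times if elapsed >= t and t not in shown]
    shown.update(due)
    return due


# One character with hidden tasks at the 8- and 15-minute marks (values assumed).
HIDDEN = {8: ["assist firefighters"], 15: ["report headcount"]}
shown = set()
first = hidden_tasks_due(9, HIDDEN.keys(), shown)   # the 8-minute task fires
second = hidden_tasks_due(9, HIDDEN.keys(), shown)  # nothing new on a re-poll
third = hidden_tasks_due(16, HIDDEN.keys(), shown)  # the 15-minute task fires
```

Tracking already-shown values in a set keeps the poll idempotent, so the selection popup appears once per hidden time value however often the timer is checked.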
Besides firefighters, the whole scene may also include students, ordinary residents, security guards, police officers, medical staff, and so on. Each professional identity determines a corresponding skill level, and in this embodiment the corresponding hidden tasks may be set according to the skill profiles of the different professional groups.
In this embodiment, a comprehensive rescue capability value is calculated for each different professional group according to its skill characteristics, using the following formula:
$$Q = \begin{cases} 0, & A < 18 \ \text{or} \ A > 60 \\ Q_0 + \alpha F_a + \beta \displaystyle\sum_{j=1}^{n} \frac{s_j}{F_s}, & 18 \le A \le 60 \end{cases}$$

wherein $Q$ represents the current person's comprehensive rescue capability value, $Q_0$ represents the basic rescue capability value of a current person aged between 18 and 60, $\alpha$ represents the weight coefficient of the age term, $\beta$ represents the weight coefficient of the skill term, $F_a$ represents the basic score of the age term, $F_s$ represents the basic score of a skill item, $A$ represents the actual age of the current person, $s_j$ represents the score corresponding to the $j$-th skill, and $n$ represents the maximum number of skill items.
It will be appreciated that, under the above formula, a current person younger than 18 or older than 60 is by default considered unable to assist in rescue, so the corresponding comprehensive rescue capability value is set to 0. For a current person aged between 18 and 60, the comprehensive rescue capability value is calculated by jointly integrating the specific age and the skills the person possesses. In actual rescue work, rescue tasks can then be allocated preferentially and reasonably according to these values, mobilising everyone in the scene other than the firefighters to participate in the rescue to the greatest extent and minimising losses. In general, rescue tasks are allocated in descending order of comprehensive rescue capability value.
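The capability computation and the descending-order allocation can be sketched together. The additive combining form for the 18-to-60 case is an assumed reading of the patent's symbol list, and all weights, base scores, and skill values below are illustrative.

```python
def rescue_capability(age, base, w_age, w_skill, age_score, skill_base, skills):
    """Comprehensive rescue capability value. People younger than 18 or
    older than 60 are assigned 0 by default; the combining formula for
    the 18-60 range is an assumption, not taken verbatim from the patent."""
    if age < 18 or age > 60:
        return 0.0
    return base + w_age * age_score + w_skill * sum(s / skill_base for s in skills)


def allocate(people):
    """Order (name, capability) pairs for task allocation, highest first."""
    return sorted(people, key=lambda p: p[1], reverse=True)


# Hypothetical people: a 35-year-old guard with two skills, and a child.
guard = rescue_capability(35, 10, 0.4, 0.6, 5, 10, [8, 6])
child = rescue_capability(12, 10, 0.4, 0.6, 5, 10, [8, 6])
order = allocate([("guard", guard), ("resident", 5.0), ("child", child)])
```

Sorting in descending capability order reproduces the allocation rule stated above: the most capable bystanders receive rescue tasks first.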
As shown in fig. 5, as a preferred embodiment of the present invention, the step of receiving the hidden task selection instruction and displaying the corresponding task execution screen further includes:
s704, after the task execution picture is completed, displaying hidden task scoring information, wherein the hidden task scoring information comprises specific scoring and priority information;
S705, receiving a role hidden task display command, enabling the selected role to execute the hidden task according to the operation, and displaying the scene of executing the hidden task from the first view of the selected role.
In the embodiment of the invention, the hidden tasks executed by the user are scored. In addition, the selected role can execute the hidden task according to the operation, and the scene of executing the hidden task is displayed from the first view of the selected role, which is convenient for the user to learn.
As shown in fig. 6, an embodiment of the present invention further provides a VR-based teaching system, where the system includes:
a simulation data receiving module 100 for receiving educational scene simulation data through VR glasses, the educational scene simulation data including a plurality of educational simulation scenes, each of which is provided with various characters therein;
a scene character selection module 200, configured to receive a scene selection command and a character selection command, start scene display, and display the selected education simulation scene from the first perspective of the selected character;
the scene position following module 300 is configured to receive a movement instruction input by a user, change the position information of a character along with the movement instruction, and display a scene at a corresponding position according to the position information;
the character operation module 400 is configured to detect position information of characters, and when the characters are located in operation position areas, display corresponding operation selection information, where each character in each education simulation scene is provided with a plurality of operation position areas, and each operation position area is correspondingly provided with operation selection information;
the operation screen display module 500 is configured to receive an operation information selection instruction and display a corresponding operation screen.
In the embodiment of the invention, when viewing an education scene, the user wears VR glasses and holds an operation handle. The VR glasses receive education scene simulation data comprising a plurality of education simulation scenes, each provided with different roles. For example, the education simulation scenes may include various security emergency scenes, and the roles in a scene may be students, ordinary residents, security guards, police officers, medical staff, firefighters, and the like. After the user determines the education simulation scene and role to experience, the user inputs a scene selection command and a role selection command through the operation handle; the scene display then formally starts, and the selected education simulation scene is displayed from the first perspective of the selected role, so that the user can experience various education scene events immersively and memorably. Next, the user inputs movement instructions through the operation handle so that the position information of the role changes accordingly, and the scene at the corresponding position is displayed according to the position information. The embodiment of the invention detects the position information of the role in real time, and when the role is located in an operation position area, the corresponding operation selection information is displayed. Each role in each education simulation scene is provided with a plurality of operation position areas, and each operation position area is correspondingly provided with operation selection information; that is, different roles need to execute different tasks in different places. The user issues an operation information selection instruction according to the operation selection information, and the corresponding operation picture is displayed.
Therefore, the user can not only immersively experience various education scene events, but also act according to his or her own judgment. The interactivity is good, the purpose of practical education is easily achieved, and real participation is realized.
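The position-following and operation-area logic described above can be sketched roughly as follows. The region shapes (axis-aligned rectangles), the sample data, and all names are illustrative assumptions; the patent does not specify how areas are represented internally.

```python
# Rough sketch: detect whether a role's position falls inside one of the
# operation position areas configured for that role, and if so return the
# operation selection information to display. All data and names are
# illustrative assumptions.

OPERATION_AREAS = {
    "firefighter": [
        {"rect": (0, 0, 10, 10),  "options": ["connect hose", "open hydrant"]},
        {"rect": (20, 0, 30, 10), "options": ["raise ladder"]},
    ],
    "medic": [
        {"rect": (5, 5, 15, 15),  "options": ["check breathing", "apply bandage"]},
    ],
}

def inside(rect, pos):
    """Axis-aligned rectangle containment test."""
    x0, y0, x1, y1 = rect
    x, y = pos
    return x0 <= x <= x1 and y0 <= y <= y1

def operation_options(role, pos):
    """Return the selection info for the area containing pos, else None."""
    for area in OPERATION_AREAS.get(role, []):
        if inside(area["rect"], pos):
            return area["options"]
    return None

print(operation_options("firefighter", (3, 4)))   # ['connect hose', 'open hydrant']
print(operation_options("firefighter", (50, 50))) # None
```

Because each role has its own area list, the same map position can surface different options for different roles, matching the "different roles execute different tasks in different places" behavior described above.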
As shown in fig. 7, as a preferred embodiment of the present invention, the operation screen display module 500 includes:
an operation screen display unit 501 configured to receive an operation information selection instruction, and display a corresponding operation screen at a first view angle of a character;
the scenario scene display unit 502 is configured to determine scenario trend of the education simulation scene according to the operation information selection instruction, and display a scene corresponding to the scenario;
an operation scoring information unit 503 for displaying operation scoring information when the scenario is completed, the operation scoring information including a score of each operation information selection instruction, and preference information of each operation information selection instruction.
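The behavior of the operation scoring information unit 503 can be sketched as a small accumulator that records a score and a preference flag per operation information selection instruction, then emits the summary when the scenario completes. Class and field names are assumptions for illustration only.

```python
# Sketch of operation scoring: record each operation information selection
# instruction while the scenario runs, then report when it completes.
# All names are illustrative assumptions.

class OperationScorer:
    def __init__(self):
        self.records = []

    def record(self, instruction, score, preferred):
        """Log one operation information selection instruction."""
        self.records.append({"instruction": instruction,
                             "score": score,
                             "preferred": preferred})

    def summary(self):
        """Scoring information shown when the scenario is completed."""
        return {"total": sum(r["score"] for r in self.records),
                "details": self.records}

scorer = OperationScorer()
scorer.record("connect hose", 8, preferred=True)
scorer.record("ignore alarm", 2, preferred=False)
print(scorer.summary()["total"])  # 10
```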
As shown in fig. 8, as a preferred embodiment of the present invention, the system further includes an operation presentation module 600, and the operation presentation module 600 includes:
a command input unit 601 for receiving a character operation presentation command including a character selection;
a command execution unit 602 for causing the selected character to move in accordance with the trajectory and executing the operation instruction;
an operation scene unit 603 for exposing a scene under an operation instruction at a first view of a selected character.
As shown in fig. 9, as a preferred embodiment of the present invention, the system further includes a hidden task execution module 700, and the hidden task execution module 700 includes:
the scene display timing unit 701 is configured to perform timing from the beginning of scene display, so as to obtain timing time;
a hidden task selection unit 702, configured to detect a timing time, and when the timing time reaches a hidden time value of a selected role, display hidden task selection information, where each role is correspondingly provided with one or more hidden time values, and each hidden time value is correspondingly provided with hidden task selection information;
the hidden task execution unit 703 is configured to receive a hidden task selection instruction, and display a corresponding task execution screen.
As shown in fig. 9, as a preferred embodiment of the present invention, the hidden task execution module 700 further includes:
a hidden task scoring unit 704, configured to display hidden task scoring information after the task execution screen is completed, where the hidden task scoring information includes specific scoring and priority information;
and the hidden task unit 705 is configured to receive a role hidden task display command, so that the selected role executes the hidden task according to the operation, and display a scene of executing the hidden task with the first view angle of the selected role.
The foregoing description of the preferred embodiments of the present invention should not be taken as limiting the invention, but rather should be understood to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in their order of execution and may be executed in other orders. Moreover, at least some of the steps in the various embodiments may include multiple sub-steps or stages that are not necessarily completed at the same time but may be executed at different times, and these sub-steps or stages are not necessarily executed in sequence; they may be executed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A VR-based teaching method, comprising the steps of:
receiving educational scene simulation data through VR glasses, wherein the educational scene simulation data comprises a plurality of educational simulation scenes, and each educational simulation scene is provided with various roles;
receiving a scene selection command and a character selection command, and starting scene display to select a first view angle of a character to display a selected education simulation scene;
receiving a movement instruction input by a user, enabling the position information of the role to change along with the movement instruction, and displaying a scene of a corresponding position according to the position information;
detecting position information of roles, displaying corresponding operation selection information when the roles are located in operation position areas, wherein each role in each education simulation scene is provided with a plurality of operation position areas, each operation position area is correspondingly provided with operation selection information, each operation position area is correspondingly provided with a position area comprehensive score value, and the position area comprehensive score value is used for reminding an experimenter to perform conscious differential learning in different operation position areas by determining the corresponding grade of the current operation position area;
and receiving an operation information selection instruction and displaying a corresponding operation picture.
2. The VR-based teaching method of claim 1, wherein the step of receiving the operation information selection instruction and displaying the corresponding operation screen specifically includes:
receiving an operation information selection instruction, and displaying a corresponding operation picture according to a first view angle of a role;
determining the scenario trend of the education simulation scene according to the operation information selection instruction, and displaying the scene corresponding to the scenario;
when the scenario is completed, operation scoring information including a score of each operation information selection instruction and preference information of each operation information selection instruction is displayed.
3. The VR-based teaching method as set forth in claim 2, wherein the method for determining the level corresponding to the current operation location area comprises the steps of:
calculating the position area comprehensive score value corresponding to each operation position area, and comparing the position area comprehensive score value against a preset score grade table to determine the grade corresponding to the current operation position area;
the calculation formula of the comprehensive score value of the location area is expressed as follows:
$$S=S_{0}\cdot\frac{1}{n}\sum_{j=1}^{n}\frac{d_{j}}{d_{0}}$$

where $S$ represents the position area comprehensive score value corresponding to each operation position area, $S_{0}$ the position area standard score value corresponding to each operation position area, $j$ the serial number of a task, $n$ the maximum number of tasks, $1\le j\le n$, $d_{j}$ the difficulty coefficient value of the $j$-th task, and $d_{0}$ the benchmark difficulty coefficient value of the tasks.
4. The VR based teaching method of claim 3, further comprising:
receiving a role operation display command, wherein the role operation display command comprises a role selection;
moving the selected roles according to the track and executing the operation instruction;
a scene under an operation instruction is presented with a first perspective of a selected character.
5. The VR based teaching method of claim 4, further comprising:
timing is carried out from the beginning of scene display to obtain timing time;
detecting timing time, and displaying hidden task selection information when the timing time reaches a hidden time value of a selected role, wherein each role is correspondingly provided with one or more hidden time values, and each hidden time value is correspondingly provided with hidden task selection information;
and receiving a hidden task selection instruction and displaying a corresponding task execution picture.
6. The VR-based teaching method of claim 5, wherein the step of receiving a hidden task selection instruction and displaying a corresponding task execution screen comprises:
after the task execution picture is completed, displaying hidden task scoring information, wherein the hidden task scoring information comprises specific scoring and priority information;
and receiving a role hidden task display command, enabling the selected role to execute the hidden task according to the operation, and displaying the scene of executing the hidden task from the first view angle of the selected role.
7. The VR based teaching method of claim 6, further comprising:
calculating the comprehensive rescue capability value of each current person in the scene, and assigning rescue tasks in descending order of comprehensive rescue capability value, wherein the comprehensive rescue capability value is used to mobilize personnel other than firefighters in the whole scene to participate in rescue to the greatest extent;
the calculation formula of the comprehensive rescue capability value of the current person in the scene is expressed as follows:
$$P=\begin{cases}P_{0}+\omega_{1}F_{1}A+\omega_{2}F_{2}\sum\limits_{i=1}^{m}s_{i}, & 18\le A\le 60\\ 0, & A<18 \text{ or } A>60\end{cases}$$

where $P$ denotes the current person's comprehensive rescue capability value, $P_{0}$ the basic rescue capability value of a current person aged between 18 and 60, $\omega_{1}$ the weight coefficient of the age term, $\omega_{2}$ the weight coefficient of the skill term, $F_{1}$ the basic score of the age term, $F_{2}$ the basic score of the skill term, $A$ the actual age of the current person, $s_{i}$ the score corresponding to the $i$-th skill, and $m$ the maximum number of skill terms.
8. A VR-based teaching system, characterized in that it applies the VR-based teaching method as claimed in any one of claims 1 to 7, the system comprising:
the simulation data receiving module is used for receiving education scene simulation data through VR glasses, wherein the education scene simulation data comprises a plurality of education simulation scenes, and each education simulation scene is provided with various roles;
the scene role selection module is used for receiving a scene selection command and a role selection command, starting scene display, and displaying a selected education simulation scene by selecting a first view angle of a role;
the scene position following module is used for receiving a movement instruction input by a user, the position information of the role changes along with the movement instruction, and the scene at the corresponding position is displayed according to the position information;
the character operation module is used for detecting the position information of the characters, displaying corresponding operation selection information when the characters are positioned in the operation position areas, wherein each character in each education simulation scene is provided with a plurality of operation position areas, and each operation position area is correspondingly provided with the operation selection information;
the operation picture display module is used for receiving the operation information selection instruction and displaying a corresponding operation picture;
the operation picture display module comprises:
an operation picture display unit for receiving an operation information selection instruction and displaying a corresponding operation picture at a first view angle of the character;
the scenario scene display unit is used for determining scenario trend of the education simulation scene according to the operation information selection instruction and displaying the scene corresponding to the scenario;
an operation scoring information unit for displaying operation scoring information when the scenario is completed, the operation scoring information including a score of each operation information selection instruction and preference information of each operation information selection instruction;
the system further includes an operation display module, the operation display module including:
a command input unit for receiving a character operation presentation command including character selection;
a command execution unit for moving the selected character according to the track and executing the operation instruction;
and the operation scene unit is used for displaying the scene under the operation instruction at the first view angle of the selected role.
9. The VR based teaching system of claim 8, further comprising a hidden task execution module comprising:
the scene display timing unit is used for timing from the beginning of scene display to obtain timing time;
the hidden task selection unit is used for detecting the timing time, displaying hidden task selection information when the timing time reaches the hidden time value of the selected role, wherein one or more hidden time values are correspondingly arranged on each role, and each hidden time value corresponds to the hidden task selection information;
and the hidden task execution unit is used for receiving the hidden task selection instruction and displaying a corresponding task execution picture.
10. The VR based teaching system of claim 9, wherein the hidden task execution module further comprises:
the hidden task scoring unit is used for displaying hidden task scoring information after the task execution picture is completed, wherein the hidden task scoring information comprises specific scoring and priority information;
and the hidden task unit is used for receiving a role hidden task display command, enabling the selected role to execute the hidden task according to the operation, and displaying a scene for executing the hidden task according to the first view angle of the selected role.
CN202310767090.6A 2023-06-27 2023-06-27 Teaching method and system based on VR Active CN116540881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310767090.6A CN116540881B (en) 2023-06-27 2023-06-27 Teaching method and system based on VR

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310767090.6A CN116540881B (en) 2023-06-27 2023-06-27 Teaching method and system based on VR

Publications (2)

Publication Number Publication Date
CN116540881A true CN116540881A (en) 2023-08-04
CN116540881B CN116540881B (en) 2023-09-08

Family

ID=87452747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310767090.6A Active CN116540881B (en) 2023-06-27 2023-06-27 Teaching method and system based on VR

Country Status (1)

Country Link
CN (1) CN116540881B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109671320A (en) * 2018-12-12 2019-04-23 广东小天才科技有限公司 Rapid calculation exercise method based on voice interaction and electronic equipment
CN110379212A (en) * 2019-07-29 2019-10-25 郑州幻视科技有限公司 A kind of VR logistics simulation system for teaching
CN110428683A (en) * 2019-07-11 2019-11-08 天津生态城投资开发有限公司 Safety training method and device based on VR
WO2020176803A1 (en) * 2019-02-27 2020-09-03 Siminsights, Inc. Augmented reality and virtual reality systems
KR20210069554A (en) * 2019-12-03 2021-06-11 한국로봇융합연구원 Trainee testing and evaluating apparatus for disaster response robot in live-virtual-constructive environment
CN113534961A (en) * 2021-08-06 2021-10-22 北京鼎普科技股份有限公司 Secret education training method and system based on VR
CN114247141A (en) * 2021-11-09 2022-03-29 腾讯科技(深圳)有限公司 Method, device, equipment, medium and program product for guiding task in virtual scene
KR20220076216A (en) * 2020-11-30 2022-06-08 빅픽쳐스 주식회사 National Competency Standards based Construction Machine Crane Training System
CN114602182A (en) * 2022-02-10 2022-06-10 腾讯科技(深圳)有限公司 Game information processing method and device and computer equipment
CN114860373A (en) * 2022-06-02 2022-08-05 北京新唐思创教育科技有限公司 Online classroom teaching interaction method, device, equipment and medium
CN115509365A (en) * 2022-11-08 2022-12-23 合肥云艺化科技有限公司 Building construction experience method and system based on VR technology
CN115909848A (en) * 2023-02-27 2023-04-04 湖南智连方舟工程科技有限公司 VR engineering detection training method and system

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109671320A (en) * 2018-12-12 2019-04-23 广东小天才科技有限公司 Rapid calculation exercise method based on voice interaction and electronic equipment
WO2020176803A1 (en) * 2019-02-27 2020-09-03 Siminsights, Inc. Augmented reality and virtual reality systems
US20220172633A1 (en) * 2019-02-27 2022-06-02 Siminsights, Inc. Augmented reality and virtual reality systems
CN110428683A (en) * 2019-07-11 2019-11-08 天津生态城投资开发有限公司 Safety training method and device based on VR
CN110379212A (en) * 2019-07-29 2019-10-25 郑州幻视科技有限公司 A kind of VR logistics simulation system for teaching
KR20210069554A (en) * 2019-12-03 2021-06-11 한국로봇융합연구원 Trainee testing and evaluating apparatus for disaster response robot in live-virtual-constructive environment
KR20220076216A (en) * 2020-11-30 2022-06-08 빅픽쳐스 주식회사 National Competency Standards based Construction Machine Crane Training System
CN113534961A (en) * 2021-08-06 2021-10-22 北京鼎普科技股份有限公司 Secret education training method and system based on VR
CN114247141A (en) * 2021-11-09 2022-03-29 腾讯科技(深圳)有限公司 Method, device, equipment, medium and program product for guiding task in virtual scene
CN114602182A (en) * 2022-02-10 2022-06-10 腾讯科技(深圳)有限公司 Game information processing method and device and computer equipment
CN114860373A (en) * 2022-06-02 2022-08-05 北京新唐思创教育科技有限公司 Online classroom teaching interaction method, device, equipment and medium
CN115509365A (en) * 2022-11-08 2022-12-23 合肥云艺化科技有限公司 Building construction experience method and system based on VR technology
CN115909848A (en) * 2023-02-27 2023-04-04 湖南智连方舟工程科技有限公司 VR engineering detection training method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YANXIANG ZHANG: "An Embedded Virtual Experiment System for Reality Classroom", IEEE Xplore *
ZHONG Qiang: "Application of VR Virtual Reality Technology in Higher Education", 科技风, no. 11
LEI Jing: "Research and Application of Character Animation in a Three-Dimensional Virtual Campus", Wanfang Database *

Also Published As

Publication number Publication date
CN116540881B (en) 2023-09-08

Similar Documents

Publication Publication Date Title
Tang et al. Evaluating the effectiveness of learning design with mixed reality (MR) in higher education
Kartiko et al. Learning science in a virtual reality application: The impacts of animated-virtual actors’ visual complexity
Ren et al. Design and comparison of immersive interactive learning and instructional techniques for 3D virtual laboratories
Chen et al. Enhancing an instructional design model for virtual reality-based learning
van der Spek Experiments in serious game design: a cognitive approach
Xie et al. Development of a virtual reality safety-training system for construction workers
Aebersold et al. Virtual and augmented realities in nursing education: State of the science
Boyce et al. The impact of surface projection on military tactics comprehension
Nguyen et al. Vrescuer: A virtual reality application for disaster response training
Berki Level of presence in MaxWhere virtual reality
Watson et al. Using mixed reality displays for observational learning of motor skills: A design research approach enhancing memory recall and usability
CN116540881B (en) Teaching method and system based on VR
Şahbaz VR-based interactive learning in architectural education: A case on Safranbolu Historical Bathhouse
Lawson et al. Multimodal virtual environments: an opportunity to improve fire safety training?
Boel et al. Applying educational design research to develop a low-cost, mobile immersive virtual reality serious game teaching safety in secondary vocational education
Keenaghan et al. State of the Art of Using Virtual Reality Technologies in Built Environment Education.
KR20220003242A (en) Disaster evacuation training and customized advertisement system based on virtual reality, a training method thereof
Fu et al. A virtual reality–based serious game for fire safety behavioral skills training
Park et al. BIM-based Virtual Reality and Human Behavior Simulation for Safety Design
De Fino et al. Boosting urban community resilience to multi-hazard scenarios in open spaces: A virtual reality–serious game training prototype for heat wave protection and earthquake response
Radoeva et al. An Approach to Development of Virtual Reality Training Systems
Chin et al. The effectiveness of a VR-based mobile learning system for university students to learn geological knowledge
Watson et al. The effect of using animated work instructions over text and static graphics when performing a small scale engineering assembly
Warren et al. Simulations, games, and virtual worlds as mindtools
Siegel Improving distance perception in virtual reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant