CN117750090B - Immersive virtual training method and device based on electronic card and storage medium - Google Patents


Info

Publication number
CN117750090B
CN117750090B (granted from application CN202410176530.5A)
Authority
CN
China
Prior art keywords
case
node
training
information
student
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410176530.5A
Other languages
Chinese (zh)
Other versions
CN117750090A (en)
Inventor
陈默
姬小兵
王劲松
朱晔
任小超
王晓菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yingji Technical Service Beijing Co ltd
Original Assignee
Yingji Technical Service Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yingji Technical Service Beijing Co ltd filed Critical Yingji Technical Service Beijing Co ltd
Priority to CN202410176530.5A
Publication of CN117750090A
Application granted
Publication of CN117750090B

Landscapes

  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses an immersive virtual training method, device and storage medium based on an electronic card, relating to the technical field of online training. The method comprises the following steps: receiving role information related to a training case from a training server; receiving node information related to a first scenario node of the training case from the training server; based on the node information, displaying a first electronic card on a first trainee client and displaying a first interactive interface on the first electronic card; in response to a first trigger operation by which the first trainee confirms entering the case on the first interactive interface, displaying a second interactive interface corresponding to the first scenario node on the first electronic card; and sending the interaction information input by the first trainee on the second interactive interface to the training server. This solves the technical problem that existing online training systems cannot give trainees a realistic experience and cannot combine the taught content with practice, so that the training leaves little impression on trainees and its effect is poor.

Description

Immersive virtual training method and device based on electronic card and storage medium
Technical Field
The application relates to the technical field of online training, and in particular to an immersive virtual training method and device based on an electronic card, and a storage medium.
Background
Online training systems are now widely used; online training overcomes constraints such as venue. Trainees can receive training online through terminal devices such as computers or mobile phones, which improves training efficiency.
At present, online training typically takes one of three forms: a teacher records videos that trainees watch online; training is delivered by live broadcast; or the teacher communicates with trainees one-to-one online.
However, all of these online training modes use video or audio as the medium and mainly deliver knowledge to trainees directly. In this process, the video or audio merely enables trainees to learn the relevant knowledge points. It is therefore difficult to give trainees a realistic experience, and because content taught through video or audio cannot be combined with practice, the training leaves little impression on them, resulting in a poor training effect.
Publication CN109637256A, entitled "Criminal investigation virtual simulation training teaching application system and method", belongs to the technical field of criminal investigation teaching. The system comprises: a student login module, through which a student logs in by entering a correct user name and password; a virtual experiment module for selecting the type of virtual simulation experiment from a list; a start experiment module which, after the experiment starts, presents a virtual command-center scene and simulates reporting and receiving an alarm; an on-site investigation module for cordoning off the crime scene and extracting and preserving physical evidence; a survey visit module for interviewing people around the scene; a case analysis module for professional analysis of the case details in a virtual command-center conference room; a technical investigation module; a physical evidence technology module; and an interrogation module. By virtually simulating the criminal investigation handling process, the invention provides more intuitive teaching for students and improves the training effect.
Publication CN116110265A is entitled "Visual simulation training method and system based on a digital twin of a thermal power plant". First, staff in the digital twin of the thermal power plant are classified according to job function types, and for each job function type a corresponding visual workflow and post operation objects are output. The output visual workflows and post operation objects are then packaged by job function type to generate employee training data packages. When plant staff undergo on-post training, the digital twin calls the training data package corresponding to the job type to carry out visual simulation training. By adding a visual simulation training function to the digital twin of the thermal power plant, the functions of the digital twin are extended, and visual training on the plant's equipment and workflows is comprehensively realized using the digital twin's refined three-dimensional model.
No effective solution has yet been proposed for the technical problem that online training systems in the prior art cannot give trainees a realistic experience and cannot combine the taught content with practice, so that the training leaves little impression on trainees and its effect is poor.
Disclosure of Invention
Embodiments of the present disclosure provide an immersive virtual training method, apparatus, and storage medium based on an electronic card, to at least solve the technical problem that online training systems in the prior art cannot give trainees a realistic experience and cannot combine the taught content with practice, so that the training leaves little impression on trainees and its effect is poor.
According to one aspect of the disclosed embodiments, there is provided an immersive virtual training method based on an electronic card, for a first terminal device of a first trainee, wherein the first terminal device runs a first trainee client. The method includes: receiving role information related to a training case from a training server, wherein the role information indicates the role played by the first trainee in the training case; receiving node information related to a first scenario node of the training case from the training server, wherein the node information describes the scene corresponding to the first scenario node and indicates the tasks to be executed at the first scenario node by the roles related to that node; based on the node information, displaying a first electronic card on the first trainee client and displaying a first interactive interface on the first electronic card, wherein the first interactive interface displays case content related to the first scenario node; in response to a first trigger operation by which the first trainee confirms entering the case on the first interactive interface, displaying a second interactive interface corresponding to the first scenario node on the first electronic card; and sending the interaction information input by the first trainee on the second interactive interface to the training server.
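As an illustration only, the client-side steps above can be pictured as a small state machine. The following is a hypothetical Python sketch, not the patented implementation; all class and field names (`TraineeClient`, `NodeInfo`, `tasks_by_role`, etc.) are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class NodeInfo:
    """Node information for one scenario node (hypothetical structure)."""
    node_id: str
    scene_description: str   # scene info shown on the first interface
    tasks_by_role: dict      # role -> task shown on the second interface

class TraineeClient:
    """Sketch of the five client-side steps of the method."""

    def __init__(self):
        self.role = None
        self.card = None     # the currently displayed electronic card
        self.outbox = []     # interaction info queued for the server

    def on_role_info(self, role):
        # Step 1: receive the role the trainee plays in the training case.
        self.role = role

    def on_node_info(self, node):
        # Steps 2-3: receive node info and display a first electronic card
        # whose first interactive interface shows the case content.
        self._node = node
        self.card = {"node": node.node_id, "interface": "first",
                     "content": node.scene_description}

    def confirm_enter_case(self):
        # Step 4: the trigger operation confirming entry into the case
        # switches the card to the second interface with the role's task.
        self.card["interface"] = "second"
        self.card["content"] = self._node.tasks_by_role.get(self.role, "")

    def submit(self, text):
        # Step 5: send the interaction info input on the second interface.
        self.outbox.append({"node": self.card["node"],
                            "role": self.role, "text": text})
```

The sketch deliberately models only state transitions; rendering the card and the network transport are out of scope here.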
According to another aspect of the embodiments of the present disclosure, there is also provided an immersive virtual training method based on an electronic card, for a training server. The method includes: receiving case configuration information from a terminal device of an administrator, wherein the case configuration information indicates the training case to be used, the trainees to be trained, and the role assigned to each trainee in the training case; according to the case configuration information, sending role information related to the training case to the terminal device of each trainee involved in the training case, wherein the role information indicates the role played by each trainee; determining a current scenario node from among the scenario nodes of the training case; determining the roles related to the current scenario node and the node information related to the current scenario node, wherein the node information describes the scene corresponding to the current scenario node and indicates the tasks to be executed at the current scenario node by the related roles; sending the node information to the terminal devices of the trainees related to the current scenario node; and receiving interaction information related to the current scenario node from the terminal devices of those trainees.
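The server-side steps can likewise be sketched in a few lines. Again this is a hypothetical illustration under invented names (`TrainingServer`, `dispatch_node_info`), not the actual training server.

```python
class TrainingServer:
    """Hypothetical sketch of the server-side steps of the method."""

    def __init__(self, case_nodes):
        # case_nodes: ordered list of (node_id, roles_involved, node_info)
        self.case_nodes = case_nodes
        self.assignments = {}    # trainee_id -> assigned role
        self.cursor = 0          # index of the current scenario node
        self.interactions = []

    def configure(self, case_config):
        # Receive case configuration from the administrator's terminal:
        # which trainees take part and which role each one plays.
        self.assignments = dict(case_config)
        # Role info that would be pushed to each trainee's client.
        return sorted(self.assignments.items())

    def current_node(self):
        # Determine the current scenario node among the case's nodes.
        return self.case_nodes[self.cursor]

    def dispatch_node_info(self):
        # Determine the roles related to the current node and send the
        # node info to the terminal devices of the trainees playing them.
        node_id, roles, info = self.current_node()
        return {t: info for t, r in self.assignments.items() if r in roles}

    def receive_interaction(self, trainee_id, payload):
        # Receive interaction info related to the current scenario node.
        self.interactions.append((self.current_node()[0], trainee_id, payload))

    def advance(self):
        self.cursor += 1
```

Here node progression is a simple cursor; the description below introduces a case-driven engine that decides the next node from the trainees' interactions.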
According to another aspect of the embodiments of the present disclosure, there is also provided a storage medium including a stored program, wherein, when the program runs, a processor performs any one of the methods described above.
According to another aspect of the embodiments of the present disclosure, there is also provided an immersive virtual training apparatus based on an electronic card, for a first terminal device of a first trainee, wherein the first terminal device runs a first trainee client. The apparatus includes: a role information receiving module for receiving role information related to a training case from a training server, wherein the role information indicates the role played by the first trainee in the training case; a node information receiving module for receiving node information related to a first scenario node of the training case from the training server, wherein the node information describes the scene corresponding to the first scenario node and indicates the tasks to be executed at the first scenario node by the roles related to that node; a first interface display module for displaying, based on the node information, a first electronic card on the first trainee client and a first interactive interface on the first electronic card, wherein the first interactive interface displays case content related to the first scenario node; a second interface display module for displaying, in response to a first trigger operation by which the first trainee confirms entering the case on the first interactive interface, a second interactive interface corresponding to the first scenario node on the first electronic card; and a sending module for sending the interaction information input by the first trainee on the second interactive interface to the training server.
According to another aspect of the embodiments of the present disclosure, there is also provided an immersive virtual training apparatus based on an electronic card, for a training server, including: a configuration information receiving module for receiving case configuration information from a terminal device of an administrator, wherein the case configuration information indicates the training case to be used, the trainees to be trained, and the role assigned to each trainee in the training case; a role information sending module for sending role information related to the training case to the terminal device of each trainee involved in the training case according to the case configuration information, wherein the role information indicates the role played by each trainee; a current node determining module for determining a current scenario node from among the scenario nodes of the training case; a node information determining module for determining the roles related to the current scenario node and the node information related to the current scenario node, wherein the node information describes the scene corresponding to the current scenario node and indicates the tasks to be executed at the current scenario node by the related roles; a node information sending module for sending the node information to the terminal devices of the trainees related to the current scenario node; and an interaction information receiving module for receiving interaction information related to the current scenario node from the terminal devices of those trainees.
According to another aspect of the embodiments of the present disclosure, there is further provided an immersive virtual training apparatus based on an electronic card, for a first terminal device of a first trainee, wherein the first terminal device runs a first trainee client, including: a first processor; and a first memory, coupled to the first processor, for providing the first processor with instructions for processing the following steps: receiving role information related to a training case from a training server, wherein the role information indicates the role played by the first trainee in the training case; receiving node information related to a first scenario node of the training case from the training server, wherein the node information describes the scene corresponding to the first scenario node and indicates the tasks to be executed at the first scenario node by the roles related to that node; based on the node information, displaying a first electronic card on the first trainee client and displaying a first interactive interface on the first electronic card, wherein the first interactive interface displays case content related to the first scenario node; in response to a first trigger operation by which the first trainee confirms entering the case on the first interactive interface, displaying a second interactive interface corresponding to the first scenario node on the first electronic card; and sending the interaction information input by the first trainee on the second interactive interface to the training server.
According to another aspect of the embodiments of the present disclosure, there is also provided an immersive virtual training apparatus based on an electronic card, for a training server, including: a second processor; and a second memory, coupled to the second processor, for providing the second processor with instructions for processing the following steps: receiving case configuration information from a terminal device of an administrator, wherein the case configuration information indicates the training case to be used, the trainees to be trained, and the role assigned to each trainee in the training case; according to the case configuration information, sending role information related to the training case to the terminal device of each trainee involved in the training case, wherein the role information indicates the role played by each trainee; determining a current scenario node from among the scenario nodes of the training case; determining the roles related to the current scenario node and the node information related to the current scenario node, wherein the node information describes the scene corresponding to the current scenario node and indicates the tasks to be executed at the current scenario node by the related roles; sending the node information to the terminal devices of the trainees related to the current scenario node; and receiving interaction information related to the current scenario node from the terminal devices of those trainees.
In the technical scheme of the embodiments, a training case is divided into a plurality of scenes following the development of the case, and each scene is further divided into a plurality of scenario nodes. At each scenario node, node information is sent to the clients on the terminal devices of the trainees playing the related roles. Each trainee then completes the corresponding task according to the situation at that node, so that the training is completed in an immersive manner. Case training carried out in this way leaves a deep impression on trainees and thereby improves the effectiveness of the training. This solves the technical problem that online training systems in the prior art cannot give trainees a realistic experience and cannot combine the taught content with practice, so that the training leaves little impression on trainees and its effect is poor.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the present disclosure, and together with the description serve to explain the present disclosure. In the drawings:
FIG. 1 is a block diagram of a hardware architecture of a computing device for implementing a method according to embodiment 1 of the present disclosure;
FIG. 2A is a schematic diagram of an electronic card-based immersive virtual training system according to embodiment 1 of the present disclosure;
FIG. 2B is a block diagram of the training server 200 according to embodiment 1 of the present disclosure;
FIG. 3A is a schematic diagram further illustrating a training case, taking training case 1 as an example;
FIG. 3B is a schematic diagram illustrating the structure of the scenario scenes, taking scenario scenes 1-7 as an example;
FIG. 4 is a block diagram of a trainee client according to the first aspect of embodiment 1 of the present disclosure;
FIG. 5 is a flow diagram of an electronic card-based immersive virtual training method according to the first aspect of embodiment 1 of the present disclosure;
FIG. 6A and FIG. 6B are schematic diagrams of the first interactive interface and the second interactive interface of the first electronic card displayed on the first trainee's terminal device according to the first aspect of embodiment 1;
FIG. 7A and FIG. 7B are schematic diagrams of the first interactive interface and the second interactive interface of the first electronic card displayed on the second trainee's terminal device according to the first aspect of embodiment 1;
FIG. 8 is a schematic diagram of the second interactive interface of the first electronic card displayed on the first trainee's terminal device when there are a plurality of second trainees, according to the first aspect of embodiment 1;
FIG. 9 is a schematic diagram of the second interactive interface of the first electronic card displayed on the first trainee's terminal device when there is a single second trainee, according to the first aspect of embodiment 1;
FIG. 10 is a schematic diagram of a trainee not involved in the current scenario node playing, on a terminal device, the audio and video of the trainees involved in the current scenario node, according to the first aspect of embodiment 1;
FIG. 11 is a flow diagram of an electronic card-based immersive virtual training method according to the second aspect of embodiment 1 of the present disclosure;
FIG. 12 is an example of training case 1 loaded by the case-driven engine according to embodiment 1 of the present disclosure;
FIG. 13 is a schematic diagram of a decision model in the decision module corresponding to a decision node according to embodiment 1 of the present disclosure;
FIG. 14 is a schematic diagram of an electronic card-based immersive virtual training apparatus according to the first aspect of embodiment 2 of the present disclosure;
FIG. 15 is a schematic diagram of an electronic card-based immersive virtual training apparatus according to the second aspect of embodiment 2 of the present disclosure;
FIG. 16 is a schematic diagram of an electronic card-based immersive virtual training apparatus according to the first aspect of embodiment 3 of the present disclosure;
FIG. 17 is a schematic diagram of an electronic card-based immersive virtual training apparatus according to the second aspect of embodiment 3 of the present disclosure.
Detailed Description
In order to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings. It is apparent that the described embodiments are merely some, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art without inventive effort, based on the embodiments of this disclosure, shall fall within the scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to the present embodiment, there is provided a method embodiment of an electronic card-based immersive virtual training method, it being noted that the steps illustrated in the flowchart of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
The method embodiments provided herein may be performed in a mobile terminal, a computer terminal, a server, or a similar computing device. FIG. 1 shows a block diagram of a hardware architecture of a computing device for implementing the electronic card-based immersive virtual training method. As shown in FIG. 1, the computing device may include one or more processors (which may include, but are not limited to, processing devices such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory for storing data, a transmission device for communication functions, and an input/output interface. The memory, the transmission device and the input/output interface are connected to the processor through a bus. In addition, the computing device may further include: a display connected to the input/output interface, a keyboard, and a cursor control device. It will be appreciated by those of ordinary skill in the art that the configuration shown in FIG. 1 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the computing device may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
It should be noted that the one or more processors and/or other data processing circuits described above may be referred to herein generally as a "data processing circuit". The data processing circuit may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuit may be a single stand-alone processing module, or may be incorporated in whole or in part into any of the other elements in the computing device. As referred to in the embodiments of the present disclosure, the data processing circuit acts as a kind of processor control (e.g., selection of a variable resistance termination path to interface with).
The memory may be used to store software programs and modules of application software, such as a program instruction/data storage device corresponding to the electronic card-based immersive virtual training method in the embodiments of the present disclosure, and the processor executes the software programs and modules stored in the memory, thereby executing various functional applications and data processing, that is, implementing the electronic card-based immersive virtual training method of the application program. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory may further include memory remotely located with respect to the processor, which may be connected to the computing device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communications provider of the computing device. In one example, the transmission means includes a network adapter (Network Interface Controller, NIC) that can be connected to other network devices via the base station to communicate with the Internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computing device.
It should be noted herein that in some alternative embodiments, the computing device shown in FIG. 1 may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both. It should also be noted that FIG. 1 is only one particular example and is intended to illustrate the types of components that may be present in the computing device described above.
FIG. 2A is a schematic diagram of the electronic card-based immersive virtual training system according to this embodiment. Referring to FIG. 2A, the system includes: the terminal device 100 of the administrator 410, the training server 200, and the terminal devices 301 to 30n of the trainees 421 to 42n.
FIG. 2B shows a block diagram of the training server 200. Referring to FIG. 2B, the training server 200 is deployed with multiple training cases, so that each case can be provided to the trainees 421-42n in the form of immersive virtual training. In addition, the training server 200 is configured with an administrator module, a case configuration module, a trainee interaction module, and a case-driven engine. The administrator module interacts with the administrator client 110 running on the terminal device 100 of the administrator 410, so that training cases can be configured through the case configuration module according to instructions from the administrator client 110, and the case-driven engine can be instructed to start or stop the progress of a case. The trainee interaction module interacts with the trainee clients 311-31n on the terminal devices 301-30n of the respective trainees 421-42n. The trainee clients 311-31n provide an immersive virtual training environment for the trainees 421-42n in the form of displayed electronic cards. The case-driven engine responds to the interactions of the trainees 421-42n according to the content and flow of the case assigned to them, and pushes the scenario development of the case to the trainee clients 311-31n at the successive scenario nodes until the outcome of the case is reached, thereby giving the trainees 421-42n an immersive experience. In addition, the case-driven engine establishes communication connections, including video and audio connections, between trainees playing different roles according to the scenario node, and the audio of the discussion between trainees playing different roles is recognized and used as the basis for deciding the next scenario node.
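The behaviour of the case-driven engine described above can be pictured as a loop over scenario nodes. The following Python sketch is an assumption-laden illustration, not the patented engine: the decision step, which in the description may involve recognising the trainees' discussion audio, is abstracted into a `decide` callback, and node delivery into a `notify` callback.

```python
def run_case(nodes, start, notify, decide):
    """Minimal sketch of a case-driven engine loop (assumed behaviour).

    nodes:  node_id -> {"roles": [...], "next": node_id | callable | None}
    notify: pushes the node's scenario development to the involved roles
    decide: collects the trainees' responses at the node (in the described
            system this could be the recognised discussion audio)
    """
    visited = []
    node_id = start
    while node_id is not None:
        visited.append(node_id)
        node = nodes[node_id]
        notify(node_id, node["roles"])
        responses = decide(node_id)
        nxt = node["next"]
        # A decision node chooses the next node from the responses;
        # an ordinary node has a fixed successor (or None at the outcome).
        node_id = nxt(responses) if callable(nxt) else nxt
    return visited
```

A decision node is simply one whose `next` entry is a function of the collected responses, which matches the decision-model idea sketched for FIG. 13.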
In addition, FIG. 3A further illustrates a training case, taking training case 1 as an example. Referring to FIG. 3A, a training case may include a plurality of scenario scenes. In this embodiment, training case 1 is, for example, emergency training for a security incident during overseas construction. The setting is that, during the completion of an overseas construction project, workers encounter a security incident, and the parties involved must respond to it to ensure that the workers are safely rescued.
For example, scenario scene 1 is the scene of the project manager's office; scenario scene 2 is, for example, the scene where the security incident occurs; and so on.
In addition, FIG. 3B illustrates the structure of the individual scenario scenes 1 to 7. Referring to FIG. 3B, scenario scene 1 includes a plurality of scenario nodes N1a, N1b through N1n; scenario scene 2 includes scenario nodes N2a, N2b, and so on; scenario scene 7 includes scenario nodes N7a and N7b, etc. Because scenario scene 1 is the initial scene of the whole training case, the first scenario node N1a of scenario scene 1 is the initial node of the training case. The first scenario nodes N2a to N7a of scenario scenes 2 to 7 are scene-transition nodes, used to inform the trainees 421-42n that the scene has changed and that immersive training begins in the new scene. By analogy, the first scenario node of each subsequent scene is a scene-transition node. Further details regarding the scenario scenes and scenario nodes will be described later.
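The scene/node layout just described can be encoded compactly. This is a hypothetical illustration of the FIG. 3B structure only; the node identifiers follow the text, the dictionary encoding is invented.

```python
# Hypothetical encoding of FIG. 3B: each scenario scene is an ordered list
# of scenario-node ids. The first node of every scene after scene 1 is a
# scene-transition node; scene 1's first node is the case's initial node.
scenes = {
    1: ["N1a", "N1b", "N1c"],
    2: ["N2a", "N2b"],
    7: ["N7a", "N7b"],
}

def initial_node():
    # Scene 1 opens the whole training case, so its first node starts it.
    return scenes[min(scenes)][0]

def is_scene_transition(scene_no, node_id):
    # A transition node tells the trainees that the scene has changed.
    return scene_no != min(scenes) and node_id == scenes[scene_no][0]
```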
In addition, fig. 4 further shows a schematic diagram of the student clients 311 to 31n running on the terminal devices 301 to 30n. Referring to fig. 4, the student clients 311 to 31n each include: a platform interaction module; an electronic card module; and an intermediate module arranged between the platform interaction module and the electronic card module. The platform interaction module communicates with the student interaction module of the training server 200; the specific interaction information is described in detail below. The electronic card module displays an interface in the form of an electronic card on the student clients 311 to 31n, so that the electronic card can interact with the corresponding students 421 to 42n. The platform interaction module can also forward the node information, received from the training server and related to the case nodes of the training case, to the electronic card module, so that the electronic card module can construct an electronic card corresponding to the received node information.
The intermediate module between the electronic card module and the platform interaction module comprises: a student interaction module, a text display module, a player module, and an audio and video acquisition module. The student interaction module receives the trigger information generated by a student's trigger operation on the electronic card and sends it to the platform interaction module, so that the platform interaction module can send corresponding interaction information to the student interaction module of the training server 200 according to the trigger information. The text display module receives node information from the platform interaction module, extracts the text information related to the case node from it, and displays that text in a text box of the electronic card. The text information includes: the type of the current case node, the case scenario to which the current case node belongs, the task introduction of the case node, and the like. The player module receives video information from the platform interaction module and plays it in a player embedded in the electronic card. The audio and video acquisition module acquires audio and video of the user and sends them to the training server 200 through the platform interaction module. It should be noted that the hardware configuration described above may be applied to the terminal device 100, the training server 200, and the terminal devices 301 to 30n in the system.
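The module split described above can be sketched as follows. This is a minimal illustrative sketch in Python; all class, field, and message names are assumptions, not the patent's actual implementation.

```python
class ElectronicCard:
    """Holds what the sub-modules render on the card (illustrative only)."""
    def __init__(self):
        self.text_box = ""
        self.video_source = None

class IntermediateModule:
    """Routes messages from the platform interaction module to the card."""
    def __init__(self, card):
        self.card = card

    def handle_node_info(self, node_info):
        # Text display module: extract the case-related text and show it
        # in the text box of the electronic card.
        self.card.text_box = "\n".join([
            f"Node type: {node_info['node_type']}",
            f"Scene: {node_info['scene']}",
            f"Task: {node_info['task_intro']}",
        ])

    def handle_video(self, stream_url):
        # Player module: hand the stream to the player embedded in the card.
        self.card.video_source = stream_url

card = ElectronicCard()
module = IntermediateModule(card)
module.handle_node_info({"node_type": "initial",
                         "scene": "case scenario 1",
                         "task_intro": "listen to the security report"})
module.handle_video("rtmp://training-server/stream/422")
print(card.text_box.splitlines()[0])
```

In a real client the two handlers would be driven by messages arriving from the platform interaction module rather than by direct calls.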
In the above operating environment, according to a first aspect of the present embodiment, there is provided an immersive virtual training method based on an electronic card, implemented by the terminal device 301 (i.e., the first terminal device) of the student 421 (i.e., the first student) shown in fig. 2A, where the terminal device 301 runs the student client 311 (i.e., the first student client) shown in fig. 2B. Fig. 5 shows a schematic flow chart of the method. Referring to fig. 5, the method includes:
S102: receiving role information related to a training case from a training server, wherein the role information indicates a first role played by the first student in the training case;
S104: receiving node information related to a first case node of the training case from the training server, wherein the node information describes the scenario information corresponding to the first case node and indicates the tasks to be performed at the first case node by the roles related to it;
S106: based on the node information, displaying a first electronic card on the first student client and displaying a first interactive interface on the first electronic card, wherein the first interactive interface displays content related to the first case node;
S108: in response to a first trigger operation by which the first student confirms entry into the case on the first interactive interface, displaying a second interactive interface corresponding to the first case node on the first electronic card; and
S110: sending the interaction information input by the first student on the second interactive interface to the training server.
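Assuming a simple message transport, the client-side flow S102 to S110 might be sketched as below. The server and card stubs, and every message shape, are hypothetical stand-ins, not the patent's actual protocol.

```python
class ServerStub:
    """Stands in for the training server; message contents are illustrative."""
    def recv_role(self):
        return {"role": "central security manager"}             # S102
    def recv_node_info(self):
        return {"node": "N1a", "task": "listen to the report"}  # S104
    def send_interaction(self, info):
        self.last_interaction = info                            # S110

class CardStub:
    """Stands in for the electronic card UI."""
    def __init__(self):
        self.interfaces = []
    def show_first_interface(self, role, node_info):            # S106
        self.interfaces.append(("first", node_info["node"]))
    def wait_for_enter(self):                                   # S108 trigger
        return True
    def show_second_interface(self, node_info):
        self.interfaces.append(("second", node_info["node"]))
    def collect_interaction(self):
        return {"audio": "voice", "video": "frames"}

def run_first_client(server, card):
    role = server.recv_role()                    # S102: role information
    node_info = server.recv_node_info()          # S104: node information
    card.show_first_interface(role, node_info)   # S106: first interface
    if card.wait_for_enter():                    # S108: first trigger operation
        card.show_second_interface(node_info)
    server.send_interaction(card.collect_interaction())  # S110

server, card = ServerStub(), CardStub()
run_first_client(server, card)
print(card.interfaces)
```

The point of the sketch is the ordering: the second interface appears only after the first trigger operation, and interaction data flows back to the server last.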
Specifically, referring to fig. 2A and 2B, in order to train the students 421 to 42n, the administrator 410 first determines, through the administrator client 110 on the terminal device 100, a training case for training the students 421 to 42n and the role assumed by each of the students 421 to 42n in that case.
For example, in the present embodiment, the administrator 410 selects training case 1 through the administrator client 110, where training case 1 is the emergency drill for a security event at an overseas construction site described above. The student 421 plays the role of the central security manager, the student 422 plays the role of the local security team leader, the students 423 and 424 play the roles of local village representatives, the student 425 plays the role of the project manager, and so on. The administrator client 110 then generates case configuration information for training the students 421 to 42n based on this configuration, and interacts with the administrator module of the training server 200 to send the case configuration information to it.
Through the case configuration module, the training server 200 configures training case 1 according to the received case configuration information, and sends the configured training case 1 to the case-driven engine.
The case-driven engine then initiates the training based on the configured training case 1 and the role assigned to each of the students 421 to 42n. For example, the case-driven engine first sends the role information of each student 421 to 42n in training case 1, through the student interaction module, to the student clients 311 to 31n of their terminal devices 301 to 30n.
Thus, the student client 311 of the terminal device 301 (i.e., the first terminal device) of the student 421 (i.e., the first student) receives, through its platform interaction module, the role information of the role assigned to the student 421 (S102).
The case-driven engine of the training server 200 then starts training case 1. The training server 200 determines the node information of the case node N1a (the first case node) of case scenario 1 of training case 1, where the node information indicates the roles related to the case node N1a and the case content of the node, including the scenario information of the case node N1a and the tasks performed by the related roles at the node.
For example, case scenario 1 of training case 1 is the scene of the project general manager's office. The case-driven engine determines the node information of the case node N1a, which describes the scenario information corresponding to the case node N1a and indicates the tasks performed by the respective roles at the case node N1a. For example, table 1 below shows the node information of the case node N1a:
TABLE 1
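The body of table 1 is not reproduced in this text. Based only on the fields the surrounding description names (node type, case scenario, scenario text, per-role tasks), the node information might look like the dictionary below; field names and wording are assumptions.

```python
# Illustrative shape of the node information for case node N1a; since the
# actual content of table 1 is not reproduced, everything below is assumed.
node_info_N1a = {
    "node_id": "N1a",
    "node_type": "initial node",
    "scene": "case scenario 1: project general manager's office",
    "scenario": ("During execution of an overseas construction project, "
                 "workers have encountered a security event."),
    "tasks": {
        "central security manager": ("receive the emergency call and listen "
                                     "to the detailed report"),
        "local security team leader": "report the security event in detail",
    },
}

# The roles related to the node are exactly the keys of its task table.
related_roles = sorted(node_info_N1a["tasks"])
print(related_roles)
```

A structure like this is what the server would serialize and send at step S104, and what the text display module would unpack at step S106.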
Since the role assigned to the student 421 is the central security manager and the role assigned to the student 422 is the local security team leader, the training server 200 sends the node information, through the student interaction module, to the student client 311 of the terminal device 301 of the student 421 and the student client 312 of the terminal device 302 of the student 422.
Thus, the platform interaction module of the trainee client 311 of the terminal device 301 of the trainee 421 receives the node information (S104).
Then, referring to fig. 6A, the platform interaction module of the student client 311 of the terminal device 301 of the student 421 passes the node information to the electronic card interface module, which displays the first interactive interface of the corresponding electronic card. The platform interaction module then feeds the text information corresponding to the node information to the text display module, which displays that text in the text box on the first interactive interface of the electronic card. Accordingly, the student client 311 displays the electronic card shown in fig. 6A, with content related to the case node displayed on it (S106).
For example, referring to fig. 6A, the student client 311 displays the following on the electronic card: "During execution of an overseas construction project, workers have encountered a security event. As the central security manager, receive the emergency call from the local security team leader and listen to his detailed report of the security event." The student 421 can thus read the content related to the case node N1a on the terminal device 301.
Then, in response to the trigger operation of the student 421 clicking "Enter" on the first interactive interface of the electronic card shown in fig. 6A (i.e., the first trigger operation), the student client 311 displays, referring to fig. 6B, a second interactive interface corresponding to the case node on the electronic card (S108).
Specifically, referring to fig. 6B, the student client 311 displays the second interactive interface on the electronic card through the electronic card interface module, and then, through the player module, displays on that interface a video chat window for video chatting with the student 422, who plays the local security team leader. Thus, the electronic card shown in fig. 6B contains a video chat window in a video chat session with the student 422.
In addition, in this embodiment, the electronic card may display different second interactive interfaces according to different node information, as described in detail later.
The student 421 then uses the video chat window displayed by the student client 311 of the terminal device 301 to complete, together with the student 422, the tasks associated with the case node N1a, such as listening through the video chat window to the student 422's report of the security event encountered by the workers and negotiating the next action plan. During this process, the audio and video acquisition module of the student client 311 sends the collected voice audio and video images of the student 421 to the training server 200 as interaction information (S110), until the student 421 clicks the "Complete" button to confirm that the tasks associated with the case node have been completed.
In this way, at each case node of each case scenario, the student clients of the terminal devices can interact with the respective students, based on the node information of the case node, in the form of electronic cards. Although the present embodiment has been described using the student client 311 of the terminal device 301 of the student 421 as an example, the terminal device 302 of the student 422, who plays the other role associated with the case node N1a, performs the same operations at the same time. Figs. 7A and 7B show the electronic card displayed for the case node N1a by the student client 312 of the terminal device 302 of the student 422. Referring to fig. 7A, the student client 312 shows on the first interactive interface of the electronic card that the student 422 plays the role of the local security team leader, with the content: "During execution of an overseas construction project, workers have encountered a security event. As the local security team leader, contact the central security manager urgently and report the security event to him in detail." After the student 422 clicks "Enter" on the electronic card, the electronic card of the student client 312 displays a second interactive interface containing a video chat window for video chatting with the student 421.
Further, although the case node N1a has been described as an example in the present embodiment, any case node shown in figs. 3A and 3B operates similarly, and the details are not repeated here.
As described in the background art, existing online training mainly uses video or audio as the medium to deliver knowledge directly to students. In this process, the video or audio merely enables the students to learn the relevant knowledge points. It is therefore difficult to give the students a realistic experience, and because content taught through video or audio cannot be combined with practice, it is difficult to leave a lasting impression, resulting in poor training effect.
Therefore, according to the technical scheme of the application, a training case is divided into a plurality of different scenes according to the development of the case, and each scene is further divided into a plurality of different case nodes. At each case node, node information is sent to the clients of the terminal devices of the students playing the relevant roles, so that each student completes the corresponding tasks according to the situation of the case node and the training is completed in an immersive manner. Case training conducted in this way leaves a deep impression on the students and thereby improves the effectiveness of the training. This solves the technical problem in the prior art that an online training system cannot give students a realistic experience and cannot combine the taught content with practice, making it difficult to impress students and leading to poor training effect.
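The division described above, case into scenes and scenes into nodes, can be sketched as a small tree. The identifiers follow figs. 3A and 3B; the data structure itself is an illustrative assumption.

```python
# Training case 1 as a scene/node tree (node lists shortened for illustration).
training_case_1 = {
    "name": "training case 1",
    "scenes": [
        {"id": 1, "nodes": ["N1a", "N1b", "N1n"]},
        {"id": 2, "nodes": ["N2a", "N2b"]},
        {"id": 7, "nodes": ["N7a", "N7b"]},
    ],
}

def initial_and_conversion_nodes(case):
    """The first node of scene 1 is the initial node of the whole case;
    the first node of every later scene is a scene-conversion node."""
    firsts = [scene["nodes"][0] for scene in case["scenes"]]
    return firsts[0], firsts[1:]

initial, conversions = initial_and_conversion_nodes(training_case_1)
print(initial, conversions)
```

Driving the case then amounts to walking this tree node by node, which is what the case-driven engine does on the server side.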
Optionally, displaying, on the first electronic card, the second interactive interface corresponding to the first case node includes: displaying a second interactive interface on the first electronic card, wherein the second interactive interface comprises a video chat window for video chatting with a second student, the second student playing another role related to the first case node; and receiving, from the training server, the audio and video information of the second student captured by the second student's second terminal device, and playing it through a player associated with the video chat window.
Specifically, according to the node information of the case node N1a, the student 421 plays the central security manager, listens to the report of the local security team leader played by the student 422, and negotiates the next action plan with the student 422.
Thus, referring to fig. 6B, while the student client 311 displays the second interactive interface through the electronic card interface module, the player module of the student client 311 displays on that interface a video chat window for video chatting with the student 422. The player module of the student client 311 receives, in real time through the platform interaction module, the audio and video information of the student 422 captured in real time by the terminal device 302, and displays it in real time in the video chat window.
Therefore, the students can communicate in real time using terminal devices such as mobile phones or tablets without having to be in the same place, and thereby complete the content of the case training.
Further optionally, in the case where there are a plurality of second students, the operation of displaying the second interactive interface on the first electronic card includes: displaying a second interactive interface on the first electronic card, wherein the second interactive interface comprises video chat windows respectively corresponding to the plurality of second students.
Specifically, although in the example of the case node N1a of the present embodiment the node is completed by two students, namely the student 421 and the student 422, some nodes are completed by more students. For example, at some case nodes the student 421 needs to cooperate with a plurality of other students.
In this case, referring to fig. 8, the student client 311 displays, on the second interactive interface of the electronic card, video chat windows corresponding to the plurality of other students, and displays in real time, in each video chat window, the audio and video information captured in real time by the terminal device of the corresponding student.
In this way, the learner 421 can communicate with a plurality of other students through the learner client 311 on the terminal device 301 at the same time, so as to complete the tasks of the corresponding case nodes together.
Optionally, the operation of sending the interaction information input by the first student on the second interactive interface to the training server includes: during the video communication between the first student and the second student, acquiring the audio and video information of the first student and sending it to the training server as the interaction information.
Specifically, referring to figs. 4 and 6B, when the student 421 communicates with the student 422 on the interactive interface shown in fig. 6B, the audio and video acquisition module of the student client 311 collects the video and audio of the student 421 using the camera and microphone of the terminal device 301, and sends them to the training server 200 as interaction information.
Likewise, during the communication with the student 421, the audio and video acquisition module of the student client 312 collects the video and audio of the student 422 using the camera and microphone of the terminal device 302, and sends them to the training server 200 as interaction information.
In this manner, the training server 200 can receive the audio and video information of each student from that student's client through the student interaction module. On the one hand, this establishes audio and video communication between the students so that they can communicate by video chat; on the other hand, the audio information of each student can be analyzed for subsequent processing.
Optionally, displaying, on the first electronic card, the second interactive interface corresponding to the first case node includes: displaying a second interactive interface comprising an audio recording window on the first electronic card. Sending the interaction information input by the first student on the second interactive interface to the training server then includes: sending the voice audio recorded by the first student through the audio recording window to the training server as the interaction information.
Specifically, although in the example of the case node N1a of the present embodiment the node is completed by two students, namely the student 421 and the student 422, some case nodes must be completed by the student 421 alone (e.g., publishing news information), in which case the student 421 does not need to interact with other students.
In this case, as shown in fig. 9, the student client 311 displays an audio recording window on the second interactive interface of the electronic card, so that the student 421 can record, through the microphone of the terminal device 301, audio stating, for example, what measures he will take to complete the case node and what news is to be released. In this way, the student 421 can input the countermeasures taken at the case node by voice, and the student client 311 collects the voice audio through the audio and video acquisition module and sends it to the training server 200 as interaction information.
Optionally, the method further comprises: in response to a second trigger operation by which the first student confirms completion of the task on the second interactive interface, sending confirmation information that the first student has completed the tasks related to the first case node to the training server.
Specifically, referring to fig. 6B, after the student 421 completes the tasks related to the case node N1a (i.e., listens to the report of the student 422 and negotiates the next action plan), he may click the "Complete" button on the second interactive interface (i.e., the second trigger operation). The student interaction module of the student client 311 then forwards the trigger information to the platform interaction module, which, in response, sends the confirmation information that the student 421 has completed the tasks related to the case node N1a to the training server 200.
Thus, after receiving the confirmation information from the student clients of all students related to the case node N1a, the training server 200 can record and save the interaction information of each student at that case node according to the confirmation information, and start the next node of training case 1.
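The wait-for-all-confirmations logic can be sketched as follows; the class and its interface are illustrative assumptions about how a case-driven engine might track confirmations, not the patent's stated design.

```python
class CaseDriverSketch:
    """Advances to the next case node once every related student confirms."""
    def __init__(self, node_students, next_node):
        self.pending = set(node_students)   # students who have not confirmed yet
        self.next_node = next_node
        self.archive = {}                   # recorded interaction info per student
        self.current = None                 # becomes next_node when all confirm

    def on_confirmation(self, student, interaction):
        self.archive[student] = interaction  # record and save the interaction
        self.pending.discard(student)
        if not self.pending:                 # all related students confirmed
            self.current = self.next_node    # start the next case node

driver = CaseDriverSketch({"421", "422"}, next_node="N1b")
driver.on_confirmation("421", "audio/video of student 421")
print(driver.current)   # still waiting for student 422
driver.on_confirmation("422", "audio/video of student 422")
print(driver.current)   # advances once both have confirmed
```

Keeping the archive separate from the pending set means interaction records survive even if a confirmation arrives twice.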
Optionally, the method further comprises: displaying a second electronic card on the first student client and displaying a third interactive interface on the second electronic card, wherein the third interactive interface comprises a playing window for playing the audio and video information of a third student, the third student being a student associated with a second case node with which the first student is not associated; and receiving, from the training server, the audio and video information of the third student captured by the third student's third terminal device, and playing it through a player associated with the playing window.
Specifically, referring to fig. 10, at a certain case node (i.e., the second case node) of a case scenario of training case 1, the role played by the student 421 (the central security manager) is not associated with that case node; that is, the case node does not require the participation of the student 421.
In this case, the electronic card interface module of the student client 311 of the terminal device 301 displays the electronic card shown in fig. 10 (i.e., the second electronic card, on which "Role: none" is displayed, indicating that the student 421 does not participate in the case node) and displays a third interactive interface on it. Meanwhile, the player module of the student client 311 displays on the third interactive interface a playing window for playing the audio and video information of the students participating in the case node. The student client 311 then receives, from the training server 200 through the platform interaction module, the audio and video information of those students captured in real time by their terminal devices and passes it to the player module, which plays it in real time in the playing window.
Therefore, even when a student does not participate in certain case nodes, the student can watch, through the electronic card displayed by the student client, the audio and video of the students related to those nodes. On the one hand, this lets the student follow the progress of the case; on the other hand, the student can watch the performance of the related students as an audience member, so that waiting for one's own case node does not become tedious.
Optionally, the method further comprises: displaying a third electronic card on the first student client and displaying a fourth interactive interface on the third electronic card, wherein the fourth interactive interface comprises a video chat window for video chatting with a training administrator; and receiving, from the training server, the audio and video information of the training administrator captured by the administrator's fourth terminal device, and playing it through a player associated with the video chat window.
Specifically, although not shown in the drawings, after the training case ends, the electronic card interface module of the student client 311 of the terminal device 301 of the student 421 displays an electronic card (i.e., the third electronic card) with an interactive interface (i.e., the fourth interactive interface) on it, while the player module displays on that interface a video chat window for video chatting with the administrator 410. The student client 311 then receives, in real time from the training server 200 through the platform interaction module, the audio and video of the administrator 410 captured in real time by the terminal device 100, and plays it in the chat window through the player module.
Thus, after the case training is completed, the student 421 can communicate with the administrator 410, who guides and critiques the student 421.
Further, according to a second aspect of the present embodiment, there is provided a training method implemented by the training server 200 shown in figs. 2A and 2B. Fig. 11 shows a schematic flow chart of the method. Referring to fig. 11, the method includes:
S202: receiving case configuration information from the terminal device of an administrator, wherein the case configuration information indicates the training case to be used, the students to be trained, and the role assumed by each student in the training case;
S204: sending, according to the case configuration information, the role information related to the training case to the terminal devices of the students related to the training case;
S206: determining a current case node from the case nodes of the training case;
S208: determining the roles related to the current case node and the node information related to it, wherein the node information describes the scenario information corresponding to the current case node and indicates the tasks performed at the current case node by the roles related to it;
S210: sending the node information to the terminal devices of the students related to the current case node; and
S212: receiving the interaction information related to the current case node from the terminal devices of the students related to it.
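The server-side flow S202 to S212 might be sketched as below with a stubbed administrator client. The configuration shape, the per-node student lists, and the helper names are all assumptions made for illustration.

```python
class AdminStub:
    """Stands in for the administrator client; the config shape is assumed."""
    def case_config(self):
        return {"case": {"nodes": ["N1a", "N1b"]},
                "roles": {"421": "central security manager",
                          "422": "local security team leader"}}

class ServerSketch:
    """Records the messages the flow S202-S212 would send and receive."""
    def __init__(self):
        self.sent = []
        self.received = []

    def run(self, admin, node_students, interactions):
        config = admin.case_config()                          # S202
        for student, role in config["roles"].items():         # S204
            self.sent.append(("role", student, role))
        for node in config["case"]["nodes"]:                  # S206: current node
            for student in node_students[node]:               # S208/S210
                self.sent.append(("node_info", student, node))
            self.received.append((node, interactions[node]))  # S212

server = ServerSketch()
server.run(AdminStub(),
           node_students={"N1a": ["421", "422"], "N1b": ["421"]},
           interactions={"N1a": "video chat a/v", "N1b": "voice audio"})
print(len(server.sent), len(server.received))
```

The sketch walks the node list in order, which mirrors how the case-driven engine keeps promoting the next node to "current" as the case advances.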
For example, as described above, the administrator 410 can interact with the administrator module of the training server 200 through the administrator client 110 on the terminal device 100, thereby sending the case configuration information of training case 1 to the training server 200, which receives it through the administrator module (S202).
The training server 200 then configures training case 1 according to the received case configuration information through the case configuration module, and sends the configured training case 1 to the case-driven engine.
Thus, the case-driven engine first sends the role information of each of the students 421 to 42n in training case 1 to the student clients 311 to 31n of their terminal devices 301 to 30n (S204).
The case-driven engine of the training server 200 then starts training case 1. Referring to fig. 3B, while driving training case 1, the training server 200 first determines the case node to be executed now, i.e., the current case node, from the case nodes shown in fig. 3B (S206). For example, the training server 200 initially determines the first case node N1a of case scenario 1 as the current case node. As training case 1 advances, the training server 200 keeps determining the following case nodes as the current case node, thereby advancing the training case.
The case-driven engine of the training server 200 then determines the roles associated with the first case node N1a and the node information associated with it, where the node information indicates the scenario information of the first case node N1a and the tasks performed by the related roles at the node (S208).
Then, the training server 200 sends, through the student interaction module, the node information related to the first case node N1a to the student client 311 of the terminal device 301 of the student 421 and the student client 312 of the terminal device 302 of the student 422, both related to the first case node N1a (S210).
And, referring to figs. 6A, 6B, 7A, and 7B, the training server 200 establishes, through the case-driven engine, an audio and video channel for video chat between the student client 311 and the student client 312, so that the students 421 and 422 can video chat through the video chat windows of the electronic cards displayed by the student clients 311 and 312 on their respective terminal devices 301 and 302.
Then, the training server 200 receives the audio and video information of the video chat from the student client 311 and the student client 312, through the student interaction module, as interaction information (S212). Upon receiving from the student clients 311 and 312 the completion information triggered when the students 421 and 422 click "Complete" on their respective clients, the training server 200 continues to perform the above operations, for the next case node N1b, on the clients of the students associated with the case node N1b.
Optionally, in the case where a plurality of students are related to the current case node, after sending the node information to the terminal devices of those students, the method further comprises: establishing communication connections for video chat between the plurality of students. The operation of receiving the interaction information related to the current case node from the terminal devices of the related students then includes: receiving, from the terminal devices of the plurality of students, the audio and video information they input during the video chat as the interaction information.
As described above, referring to figs. 6A, 6B, 7A, 7B, and 8, in the case where a plurality of students (i.e., no fewer than two) participate in the current case node, the training server 200 establishes, through the student interaction module, communication connections for video chat between the terminal devices of the students related to the current case node, and receives from those terminal devices the audio and video information of each student during the video chat as interaction information.
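One way to realize the communication connections between several students is a full mesh of pairwise links. The sketch below shows only the pairing step, with transport omitted; the mesh topology is an assumption for illustration, not the patent's stated design.

```python
from itertools import combinations

def video_chat_links(clients):
    """Pairwise video-chat links among all clients of the current case node."""
    return list(combinations(sorted(clients), 2))

# Three student clients on one node yield three links.
links = video_chat_links(["311", "312", "313"])
print(links)
```

With n clients this yields n·(n−1)/2 links; a production system might instead route all streams through the server, but the pairing enumeration is the same.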
Optionally, in the case where a single student is related to the current case node, the operation of receiving the interaction information related to the current case node from the terminal device of that student includes: receiving, from the terminal device of the single student, the audio information input by the student and related to the current case node as the interaction information.
As described above, referring to fig. 9, in the case where the learner related to the current case node is a single learner, the learner interaction module of the training server 200 directly receives, from the terminal device of that single learner, the audio information input by the learner in real time as the interaction information.
Optionally, the method further comprises: sending the interaction information to the terminal devices of students who are not related to the current case node.
As described above with reference to fig. 10, the trainee interaction module of the training server 200 also transmits the received interaction information (e.g., the audio-video information or the audio information) to the terminal devices of trainees not related to the current case node. Therefore, trainees who are not related to the current case node can watch, through their terminal devices, the trainees related to the current case node performing their tasks.
Optionally, the method further comprises: receiving confirmation information of completing tasks related to the current case node from terminal equipment of students related to the current case node; generating interactive text information corresponding to the interactive information in response to the confirmation information; coding the interactive text information by using a coding model based on an attention mechanism to generate node characteristic information corresponding to the current case node; and starting a subsequent case node of the current case node.
Specifically, referring to fig. 6B, 7B, 8 and 9, a student participating in the current case node may, for example, click a "complete" button on the terminal device to transmit, to the training server 200, confirmation information that the student has completed the task of the current case node. The training server 200 receives the confirmation information through the student interaction module and forwards it to the case driving engine, which processes it further upon receipt.
Referring to fig. 12, when driving a training case, the case driving engine loads the instance corresponding to that training case. For example, the training server 200 loads the instance corresponding to training case 1 when driving training case 1. Fig. 12 illustrates a schematic diagram of such an instance, taking training case 1 as an example.
Referring to fig. 12, the instance of training case 1 includes a voice recognition module and an encoding module. Thus, for example, after the training server 200 receives, from the terminal devices 301 and 302, confirmation information that the corresponding students have completed the task related to the first case node N1a, the voice audio in the interaction information of the students 421 and 422 can be recognized by the voice recognition module to generate the corresponding interactive text information. For example, Table 2 below shows the interactive text information corresponding to the first case node N1a:
TABLE 2
Since the students 421 and 422 communicate through video chat, the student interaction module of the training server 200 receives their voice audio from the terminal devices 301 and 302 alternately in time sequence, so that the voice recognition module in the instance of training case 1 generates, alternately in time sequence, the interactive text information Xi (i = 1 to I) corresponding to the students 421 and 422 from the received voice audio. Preferably, each piece of interactive text information Xi is text that has been preprocessed. The preprocessing includes, for example, word segmentation and stop-word removal, so that the interactive text information Xi consists of a plurality of word segments.
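The preprocessing step (word segmentation plus stop-word removal) can be illustrated with a toy sketch. Whitespace tokenisation stands in for a real Chinese word segmenter, and the stop-word list and utterances are invented for illustration:

```python
# Toy preprocessing sketch (assumed details): whitespace tokenisation stands in
# for the word segmentation implied by the source; the stop-word list is illustrative.
STOP_WORDS = {"the", "a", "is", "to"}

def preprocess(utterance: str) -> list[str]:
    """Segment an utterance and drop stop words, yielding the word segments of Xi."""
    return [tok for tok in utterance.lower().split() if tok not in STOP_WORDS]

# Interleave the two trainees' utterances in arrival order, as the trainee
# interaction module does for video chat.
dialogue = [("421", "Check the valve status"), ("422", "The valve is closed")]
interactive_text = [(who, preprocess(text)) for who, text in dialogue]
```

Each entry of `interactive_text` corresponds to one piece of interactive text information Xi composed of word segments.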
Then, the encoding module of the training case 1 instance encodes the interactive text information using the attention-based encoding model and generates the node characteristic information Ft1 corresponding to the node N1a.
Specifically, the encoding module first generates a corresponding interactive text feature Fxi for each piece of interactive text information Xi using a preset encoding model, as shown in Table 3 below:
Table 3
Here Fxi = {fxi,1, fxi,2, fxi,3, ...}, where each fxi,k is the word vector corresponding to one word segment in the interactive text information Xi.
The attention-based encoding model may be, for example, an encoding model based on multiple Transformer layers, preferably a BERT-based encoding model. The encoding module then concatenates the interactive text features Fxi to generate the node characteristic information Ft1 corresponding to the node N1a.
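The data flow from word segments to the concatenated node characteristic information can be sketched as follows. A deterministic toy embedding stands in for the BERT/Transformer encoder, which cannot be reproduced here; only the shape of the computation (tokens, then Fxi, then the concatenated Ft) follows the text:

```python
# Illustrative stand-in for the attention-based encoder: a deterministic toy
# "word vector" replaces BERT.  Only the data flow follows the source text.
def embed(token: str, dim: int = 4) -> list[float]:
    """Toy word vector fxi,k; a real system would use a Transformer encoder."""
    seed = sum(ord(c) for c in token)
    return [((seed * (k + 1)) % 7) / 7.0 for k in range(dim)]

def encode_text(tokens: list[str]) -> list[list[float]]:
    """Interactive text feature Fxi: one word vector per word segment."""
    return [embed(t) for t in tokens]

def node_feature(all_texts: list[list[str]]) -> list[float]:
    """Node characteristic information Ft: the concatenation of every Fxi, flattened."""
    return [v for tokens in all_texts for vec in encode_text(tokens) for v in vec]

# Two interactive texts of two word segments each -> 4 word vectors of dim 4.
ft = node_feature([["check", "valve"], ["valve", "closed"]])
```

With four word segments and a 4-dimensional toy embedding, `ft` has length 16; a BERT-based encoder would produce contextual vectors instead, but the concatenation step is the same.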
In this way, after receiving confirmation information of task completion from the terminal devices of the respective students associated with a case node, the case driving engine of the training server 200 can generate the node characteristic information corresponding to that case node based on all of the interaction information corresponding to the node, so that the node characteristic information can be used in subsequent determination operations.
The case driving engine then starts the node following the current case node, for example the case node N1b.
Optionally, when the current case node interfaces with a plurality of parallel subsequent case nodes and the plurality of subsequent case nodes belong to the same case scene as the current case node, the operation of starting a subsequent case node of the current case node includes: determining, according to the node characteristic information, a target case node to be started from the plurality of subsequent case nodes by using a first determination model corresponding to a first determination node.
Specifically, referring to fig. 3B, the training case 1 includes a plurality of determination nodes 1 to 3. Thus, referring to fig. 12, the instance of training case 1 includes determination modules 1 to 3, corresponding to the determination nodes 1 to 3 respectively, for executing the determination operations of those nodes.
Thus, for example, in the case where the current case node is the case node N1b, the case node N1b interfaces with two parallel subsequent case nodes N1c and N1d, and the subsequent case nodes N1c and N1d belong to the same case scene 1 as the current case node N1b.
In this case, in order to determine a target subsequent node to be started after the case node N 1b, the determination module 1 corresponding to the determination node 1 inputs the node characteristic information Ft 2 of the case node N 1b to a previously set neural network-based determination model (i.e., a first determination model), thereby determining the target subsequent node.
Fig. 13 shows a schematic diagram of the first determination model. Referring to fig. 13, the model includes an input layer, hidden layers, and an output layer of a neural network, and a softmax classifier connected to the output layer, where the output layer has the same number of elements as the softmax classifier. Referring to fig. 3B, since the case node N1b interfaces with two subsequent case nodes N1c and N1d, the classification vector output by the softmax classifier also contains two elements, representing the probability that N1c is the target case node and the probability that N1d is the target case node, respectively. The determination module 1 therefore selects the case node with the higher probability as the target case node.
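The selection step of the first determination model, a softmax over one output element per subsequent case node followed by picking the most probable node, can be sketched as below. The single linear layer and the parameter values are illustrative stand-ins for the trained multi-layer network of fig. 13:

```python
import math

def softmax(z: list[float]) -> list[float]:
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def choose_successor(node_feature, weights, biases, successors):
    """One linear output layer + softmax; pick the successor with the highest
    probability.  A real determination model would have trained hidden layers."""
    logits = [sum(w * x for w, x in zip(row, node_feature)) + b
              for row, b in zip(weights, biases)]
    probs = softmax(logits)
    return successors[probs.index(max(probs))], probs

# Two successors N1c / N1d; untrained, purely illustrative parameters.
target, probs = choose_successor([0.2, 0.9],
                                 weights=[[1.0, 0.0], [0.0, 1.0]],
                                 biases=[0.0, 0.0],
                                 successors=["N1c", "N1d"])
```

With these toy parameters the second logit dominates, so `target` is `"N1d"`; in the actual system the weights would be learned so that the choice reflects the trainees' negotiation at the node.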
In addition, when the current case node interfaces with more subsequent case nodes, the numbers of elements of the output layer and of the softmax classifier likewise correspond to the number of subsequent case nodes.
In this way, the target case node to be started can be determined, from the plurality of subsequent case nodes interfacing with the current case node, by the neural-network-based determination model, according to how the students communicated and negotiated at the current case node.
Optionally, when the current case node is the last case node of the case scene to which it belongs, the current case node interfaces with a plurality of parallel subsequent case nodes, and the plurality of subsequent case nodes belong to different case scenes, the operation of starting a subsequent case node of the current case node includes: determining the weight value of each case node in the same case scene according to the distance between each case node belonging to the same case scene as the current case node and the current case node; weighting the node characteristic information corresponding to each case node in the same case scene according to the weight values; and determining, according to the weighted node characteristic information, a target case node to be started from the plurality of subsequent case nodes by using a second determination model corresponding to a second determination node.
Specifically, referring to fig. 3B, in the case where the current case node is N1m or N1n, the next case node of the current case node is the initial case node (i.e., the scene transition node) of a new case scene. It is therefore necessary to determine which case scene should follow once the case scene to which the current case node belongs has been fully executed.
Taking N1m as an example, suppose the current case node is N1m. The determination module 2 (i.e., the second determination module) determines the serial case nodes that are in a serial relation with the current case node N1m from among the case nodes of the current case scene (i.e., case scene 1). In this embodiment, it is assumed that the serial case nodes and the current case node N1m together form a set NS = {Ns1, Ns2, Ns3, ..., NsJ} of case nodes.
Then, the determination module 2 determines the weight of each serial case node and of the current case node according to its distance from the case node N1m, i.e., the number of edges (connection lines between nodes) between the serial case node and the current case node N1m. The greater the distance between a serial case node and the current case node, the smaller its weight value.
Specifically, the weight corresponding to each serial case node may be determined by the following formula:
g = k * d + b;  (1)

[Formula (2), which maps g to the weight value of each case node, appeared as an image in the source and is not reproduced here.]

Wherein d is the total number of edges between a serial case node and the current case node N1m and represents the distance between that serial case node and the current case node (for the current case node N1m itself, d is equal to 0); k and b are linear parameters that can be determined using a gradient descent method.
Further, preferably, the weight values of the serial case nodes and the current case node calculated using formulas (1) and (2) may be processed by a softmax function, so that the sum of the weights corresponding to the case nodes is 1.
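A sketch of the distance-based weighting under stated assumptions: formula (1) is taken from the text, while the exact mapping from g to a weight (formula (2)) is not recoverable from this copy, so a softmax over g with a negative slope k stands in for it, reproducing the stated properties that weights decrease with distance and sum to 1:

```python
import math

def node_weights(distances: list[int], k: float = -0.5, b: float = 1.0) -> list[float]:
    """g_j = k * d_j + b (formula 1), then softmax-normalise so the weights sum to 1.
    k < 0 makes the weight fall as the distance grows, matching the text; the
    softmax step stands in for the unrecoverable formula (2)."""
    g = [k * d + b for d in distances]
    m = max(g)
    e = [math.exp(v - m) for v in g]
    s = sum(e)
    return [v / s for v in e]

# Current node (d = 0) plus two serial predecessors at distances 1 and 2.
w = node_weights([0, 1, 2])
```

The resulting weights w1 > w2 > w3 would then multiply the node characteristic information Fs1 to FsJ, giving the weighted features Fw1 to FwJ described next.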
Then, the determination module 2 weights the node characteristic information Fs1 to FsJ of the case nodes Ns1 to NsJ using the calculated weights wj (j = 1 to J), obtaining weighted node characteristic information Fw1 to FwJ.
Then, the determination module 2 determines the target subsequent node to be driven from the weighted node characteristic information Fw1 to FwJ, using a neural network structure similar to that of fig. 13, in which the numbers of elements of the output layer and of the softmax classifier are determined by the number of subsequent nodes interfacing with the current case node.
Therefore, the technical scheme of the invention can accurately determine the target case scene to be driven according to the node characteristic information of each case node in the case scene to which the current case node belongs.
Further, referring to fig. 1, according to a third aspect of the present embodiment, there is provided a storage medium. The storage medium includes a stored program, wherein, when the program runs, a processor performs any one of the methods described above.
According to the technical scheme, the training case is divided into a plurality of different scenes according to the development of the training case, and each scene is further divided into a plurality of different case nodes. At each case node, node information is transmitted to the client on the terminal device of each student playing a relevant role. Each student thus completes the corresponding task according to the situation of the case node, and the training proceeds in an immersive manner. In this way, the case training leaves a deep impression on the students, thereby improving the effectiveness of the training. This solves the technical problem in the prior art that an online training system cannot give students a realistic experience and cannot combine the taught content with practice, so that it is difficult to impress the students, resulting in a poor training effect.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general-purpose hardware platform, or by means of hardware, although in many cases the former is preferred. Based on such understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the method according to the embodiments of the present invention.
Example 2
Fig. 14 shows an electronic card-based immersive virtual training apparatus 1400 in accordance with a first aspect of the present embodiment for a first terminal device of a first learner, wherein the first terminal device is operated with a first learner client, the apparatus 1400 corresponding to the method in accordance with the first aspect of embodiment 1. Referring to fig. 14, the apparatus 1400 includes: a character information receiving module 1410 for receiving character information related to a training case from a training server, wherein the character information is used to indicate a character that a first learner plays in the training case; a node information receiving module 1420 for receiving node information related to a first scenario node of a training case from a training server, wherein the node information is used for describing scenario information corresponding to the first scenario node and indicates a task performed by a character related to the first scenario node at the first scenario node; the first interface display module 1430 is configured to display a first electronic card on the first learner client based on the node information, and display a first interactive interface on the first electronic card, where the first interactive interface displays content related to a case of the first scenario node; the second interface display module 1440 is configured to display a second interactive interface corresponding to the first scenario node on the first electronic card in response to the first trigger operation of the first learner confirming entry into the first scenario at the first interactive interface; and a transmitting module 1450, configured to transmit the interaction information input by the first learner at the second interaction interface to the training server.
Optionally, the second interface display module 1440 includes: the first display sub-module is used for displaying a second interaction interface on the first electronic card, wherein the second interaction interface comprises a video chat window for performing video chat with a second student, and the second student plays other roles related to the first scenario node; and the first receiving sub-module is used for receiving the audio and video information of the second student shot by the second terminal equipment of the second student from the training server and playing the audio and video information through a player associated with the video chat window.
Optionally, there are a plurality of second trainees, such that the first display sub-module includes: a second display sub-module, configured to display a second interactive interface on the first electronic card, wherein the second interactive interface comprises a video chat window corresponding to each of the second trainees.
Optionally, the sending module 1450 includes: a first acquisition sub-module, configured to acquire audio and video information of the first student in the process of video communication between the first student and the second student, and send the audio and video information to the training server as interaction information.
Optionally, the second interface display module 1440 includes: a third display sub-module for displaying a second interactive interface on the first electronic card, wherein the second interactive interface comprises an audio recording window, and
A transmitting module 1450, comprising: a first transmitting sub-module for transmitting the voice audio information recorded by the first learner through the audio recording window as interactive information to the training server, and wherein,
The apparatus 1400 further comprises: a confirmation module, configured to, in response to a second trigger operation of the first learner confirming completion of the task at the second interactive interface, send to the training server confirmation information that the first learner has completed the task related to the first scenario node, and wherein,
The apparatus 1400 further comprises: the third interface display module is used for displaying a second electronic card at the first student client and displaying a third interactive interface at the second electronic card, wherein the third interactive interface comprises a playing window for playing audio and video information of a third student, the third student is a student associated with a second case node, and the second case node is not associated with the first student; and the first playing module is used for receiving the audio and video information of the third student shot by the third terminal equipment of the third student from the training server and playing the audio and video information through a player associated with a playing window.
The apparatus 1400 further comprises: the fourth interface display module is used for displaying a third electronic card on the first student client and displaying a fourth interactive interface on the third electronic card, wherein the fourth interactive interface comprises a video chat window for performing video chat with a training manager; and the second playing module is used for receiving the audio and video information of the training manager shot by the fourth terminal equipment of the training manager from the training server and playing the audio and video information through a player associated with the video chat window.
Further, fig. 15 shows an electronic card-based immersive virtual training apparatus 1500 for a training server according to the second aspect of the present embodiment, the apparatus 1500 corresponding to the method according to the second aspect of embodiment 1. Referring to fig. 15, the apparatus 1500 includes: a configuration information receiving module 1510, configured to receive case configuration information from a terminal device of an administrator, wherein the case configuration information indicates the training case used for training, the trainees to be trained, and the role assumed by each trainee in the training case; a role information transmitting module 1520, configured to transmit role information related to the training case to the terminal devices of the respective trainees related to the training case according to the case configuration information, where the role information is used to indicate the roles played by the respective trainees in the training case; a current case node determining module 1530, configured to determine a current case node from the case nodes in the training case; a node information determining module 1540, configured to determine the role related to the current case node and node information related to the current case node, where the node information is used to describe the context information corresponding to the current case node and indicates the task executed at the current case node by the role related to that node; a node information transmitting module 1550, configured to transmit the node information to the terminal devices of the trainees related to the current case node; and an interaction information receiving module 1560, configured to receive interaction information related to the current case node from the terminal devices of the trainees related to the current case node.
The apparatus 1500 further comprises: the confirmation information receiving module is used for receiving confirmation information which is completed on the task related to the current case node from the terminal equipment of the student related to the current case node; the first generation module is used for responding to the confirmation information and generating interactive text information corresponding to the interactive information; the second generation module is used for encoding the interactive text information by using an encoding model based on an attention mechanism and generating node characteristic information corresponding to the current case node; and the node starting module is used for starting the follow-up case nodes of the current case node.
Optionally, when the current case node interfaces with a plurality of parallel subsequent case nodes and the plurality of subsequent case nodes belong to the same case scene as the current case node, the node starting module includes: a first determining sub-module, configured to determine, according to the node characteristic information, a target case node to be started from the plurality of subsequent case nodes by using a first determination model corresponding to a first determination node.
Optionally, when the current case node is the last case node of the case scene to which it belongs, the current case node interfaces with a plurality of parallel subsequent case nodes, and the plurality of subsequent case nodes belong to different case scenes, the node starting module includes: a second determining sub-module, configured to determine the weight value of each case node in the same case scene according to the distance between each case node belonging to the same case scene as the current case node and the current case node; a weighting sub-module, configured to weight the node characteristic information corresponding to each case node in the same case scene according to the weight values; and a third determining sub-module, configured to determine, according to the weighted node characteristic information, a target case node to be started from the plurality of subsequent case nodes by using a second determination model corresponding to the second determination node.
Therefore, according to the technical scheme of this embodiment, the training case is divided into a plurality of different scenes according to its development, and each scene is further divided into a plurality of different case nodes. At each case node, node information is transmitted to the client on the terminal device of each student playing a relevant role. Each student thus completes the corresponding task according to the situation of the case node, and the training proceeds in an immersive manner. In this way, the case training leaves a deep impression on the students, thereby improving the effectiveness of the training. This solves the technical problem in the prior art that an online training system cannot give students a realistic experience and cannot combine the taught content with practice, so that it is difficult to impress the students, resulting in a poor training effect.
Example 3
Fig. 16 shows an electronic card-based immersive virtual training apparatus 1600 for a first terminal device of a first learner, wherein the first terminal device runs a first learner client, according to the first aspect of the present embodiment, the apparatus 1600 corresponding to the method according to the first aspect of embodiment 1. Referring to fig. 16, the apparatus 1600 includes: a first processor 1610; and a first memory 1620, coupled to the first processor 1610, for providing the first processor 1610 with instructions for processing the following steps: receiving character information related to the training case from a training server, wherein the character information is used for indicating the character played by the first student in the training case; receiving node information related to a first scenario node of the training case from the training server, wherein the node information is used for describing the scenario information corresponding to the first scenario node and indicates the tasks executed at the first scenario node by the characters related to that node; based on the node information, displaying a first electronic card on the first student client and displaying a first interactive interface on the first electronic card, wherein the first interactive interface displays content related to the case of the first scenario node; in response to a first trigger operation of the first learner confirming entry into the case at the first interactive interface, displaying a second interactive interface corresponding to the first scenario node on the first electronic card; and sending the interaction information input by the first student at the second interactive interface to the training server.
Optionally, displaying, on the first electronic card, the second interactive interface corresponding to the first scenario node includes: displaying a second interactive interface on the first electronic card, wherein the second interactive interface comprises a video chat window for video chat with a second learner, and wherein the second learner plays another role related to the first scenario node; and receiving the audio and video information of the second student captured by the second terminal device of the second student from the training server, and playing the audio and video information through a player associated with the video chat window.
Optionally, when there are a plurality of second students, the operation of displaying the second interactive interface on the first electronic card includes: displaying a second interactive interface on the first electronic card, wherein the second interactive interface comprises video chat windows respectively corresponding to the second students.
Optionally, the operation of sending the interaction information input by the first learner at the second interaction interface to the training server includes: and in the process of video communication between the first student and the second student, acquiring audio and video information of the first student, and sending the audio and video information to the training server as interaction information.
Optionally, displaying, on the first electronic card, the second interactive interface corresponding to the first scenario node includes: displaying a second interactive interface on the first electronic card, wherein the second interactive interface comprises an audio recording window; and the operation of sending the interaction information input by the first learner at the second interactive interface to the training server includes: transmitting the voice audio information recorded by the first learner through the audio recording window to the training server as interaction information, and wherein,
The first memory 1620 is further configured to provide the first processor 1610 with instructions for processing the following steps: in response to a second trigger operation of the first learner confirming completion of the task at the second interactive interface, sending to the training server confirmation information that the first learner has completed the task related to the first scenario node, and wherein,
The first processor 1610 is further configured to provide instructions to the first processor 1610 for processing the following processing steps: displaying a second electronic card on the first student client and displaying a third interactive interface on the second electronic card, wherein the third interactive interface comprises a playing window for playing audio and video information of a third student, the third student is a student associated with a second case node, and the second case node is not associated with the first student; and receiving the audio and video information of the third student shot by the third terminal equipment of the third student from the training server, and playing the audio and video information through a player associated with a playing window.
Optionally, the first memory 1620 is further configured to provide the first processor 1610 with instructions for processing the following steps: displaying a third electronic card on the first student client and displaying a fourth interactive interface on the third electronic card, wherein the fourth interactive interface comprises a video chat window for performing a video chat with a training manager; and receiving audio and video information of the training manager captured by the fourth terminal device of the training manager from the training server, and playing the audio and video information through a player associated with the video chat window.
Further, fig. 17 shows an electronic card-based immersive virtual training apparatus 1700 for a training server according to the second aspect of the present embodiment, the apparatus 1700 corresponding to the method of the second aspect of embodiment 1. Referring to fig. 17, the apparatus 1700 includes: a second processor 1710; and a second memory 1720, coupled to the second processor 1710, for providing the second processor 1710 with instructions for processing the following steps: receiving case configuration information from a terminal device of an administrator, wherein the case configuration information indicates the training case used for training, the trainees to be trained, and the role assumed by each trainee in the training case; sending, according to the case configuration information, role information related to the training case to the terminal devices of the respective students related to the training case, wherein the role information is used for indicating the roles played by the respective students in the training case; determining a current case node from the case nodes of the training case; determining the role related to the current case node and node information related to the current case node, wherein the node information is used for describing the context information corresponding to the current case node and indicates the task executed at the current case node by the role related to that node; sending the node information to the terminal devices of the students related to the current case node; and receiving interaction information related to the current case node from the terminal devices of the students related to the current case node.
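The case configuration information received from the administrator's terminal might take a shape like the following; every field name and value here is a hypothetical illustration, not disclosed by the source:

```python
# Hypothetical case configuration information: which training case to run,
# which trainees participate, and which role each trainee assumes.
case_config = {
    "training_case": "training_case_1",
    "trainees": ["trainee_421", "trainee_422"],
    "roles": {"trainee_421": "dispatcher", "trainee_422": "field_operator"},
}

def roles_for_case(config: dict) -> dict:
    """Resolve each trainee's role, as the role information sending module would."""
    return {t: config["roles"][t] for t in config["trainees"]}
```

The role information transmitting step would then send each entry of `roles_for_case(case_config)` to the corresponding trainee's terminal device.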
Optionally, the second memory 1720 is further configured to provide the second processor 1710 with instructions for the following processing steps: receiving, from the terminal device of the student related to the current case node, confirmation information that the task related to the current case node has been completed; generating interactive text information corresponding to the interaction information in response to the confirmation information; encoding the interactive text information by using an attention-based encoding model to generate node characteristic information corresponding to the current case node; and starting a subsequent case node of the current case node.
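A self-contained NumPy sketch of attention-based pooling, standing in for the "attention-based encoding model" mentioned above. The single query vector and the toy dimensions are assumptions; the patent does not fix the encoder architecture, and a production system would more likely use a pretrained Transformer encoder.

```python
# Scaled dot-product attention pooling: compress the token embeddings of
# the interactive text into a single node feature vector.
import numpy as np

def attention_encode(token_vecs: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Pool a (seq_len, d) matrix of token embeddings into one d-dimensional
    node feature vector via attention against a single query vector."""
    d = token_vecs.shape[1]
    scores = token_vecs @ query / np.sqrt(d)   # (seq_len,) attention scores
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()
    return weights @ token_vecs                # attention-weighted sum -> (d,)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))   # 5 tokens of interactive text, dim 8
query = rng.normal(size=8)
node_feature = attention_encode(tokens, query)
```

The resulting `node_feature` plays the role of the "node characteristic information" that the judging models consume.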
Optionally, when the current case node interfaces with a plurality of parallel subsequent case nodes and the plurality of subsequent case nodes belong to the same case scene as the current case node, the operation of starting a subsequent case node of the current case node includes: determining, according to the node characteristic information, a target case node to be started from the plurality of subsequent case nodes by using a first judging model corresponding to a first judging node.
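One plausible form of such a "first judging model" is a linear scorer over the node feature that picks one of the parallel successor nodes. The linear architecture and the parameter names below are assumptions for illustration; the patent does not specify the model's internals.

```python
# Hypothetical judging model: score each candidate successor node from the
# node feature and pick the highest-scoring one.
import numpy as np

def select_successor(feature: np.ndarray, successor_ids: list,
                     W: np.ndarray, b: np.ndarray) -> str:
    """Return the id of the successor node with the highest score."""
    logits = W @ feature + b          # one logit per candidate successor
    return successor_ids[int(np.argmax(logits))]

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 8))           # 3 parallel successors, feature dim 8
b = np.zeros(3)
feature = rng.normal(size=8)
target = select_successor(feature, ["n2", "n3", "n4"], W, b)
```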
Optionally, when the current case node is the last case node of the case scene to which it belongs, the current case node interfaces with a plurality of parallel subsequent case nodes, and the plurality of subsequent case nodes belong to different case scenes, the operation of starting a subsequent case node of the current case node includes: determining weight values for the case nodes in the same case scene according to the distances between the case nodes belonging to the same case scene as the current case node and the current case node; weighting the node characteristic information corresponding to each case node in the same case scene according to the weight values; and determining, according to the weighted node characteristic information, a target case node to be started from the plurality of subsequent case nodes by using a second judging model corresponding to a second judging node.
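The distance-based weighting step can be sketched as below. The patent only states that weights are determined from each node's distance to the current node; the inverse-distance scheme and the weighted-sum aggregation used here are assumptions.

```python
# Weight scene-node features so that nodes closer to the current node
# contribute more, then aggregate into one feature for the second judging
# model.
import numpy as np

def weighted_scene_feature(node_features: np.ndarray,
                           distances: np.ndarray) -> np.ndarray:
    """Aggregate a (num_nodes, d) feature matrix for the current case scene,
    with per-node weights decreasing as distance to the current node grows."""
    weights = 1.0 / (1.0 + np.asarray(distances, dtype=float))  # assumed scheme
    weights /= weights.sum()                   # normalize to sum to 1
    return weights @ node_features             # (d,) weighted aggregate

feats = np.eye(3)                              # toy features for 3 scene nodes
agg = weighted_scene_feature(feats, np.array([0.0, 1.0, 2.0]))
# agg would then be scored by the second judging model, e.g. a classifier
# over the candidate subsequent case nodes
```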
Therefore, with the technical solution of this embodiment, the training case is divided into a plurality of different scenes according to the development of the case, and each scene is further divided into a plurality of different case nodes. At each case node, node information is sent to the client on the terminal device of each student playing a relevant role, so that each student completes the corresponding task according to the scenario of that case node and thereby completes the training in an immersive manner. Case training conducted in this way leaves a deep impression on the students and improves the effectiveness of the training. This solves the technical problem in the prior art that an online training system cannot give students a realistic experience and cannot combine the taught content with practice, so that the students are left with little impression and the training effect is poor.
The serial numbers of the foregoing embodiments of the present invention are merely for description and do not represent the relative merits of the embodiments.
In the foregoing embodiments of the present invention, the description of each embodiment has its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention.

Claims (8)

1. An electronic card-based immersive virtual training method for a first terminal device of a first student, wherein a first student client runs on the first terminal device, the method comprising:
receiving role information related to a training case from a training server, wherein the role information is used for indicating the role played by the first student in the training case;
Receiving node information related to a first case node of the training case from the training server, wherein the node information is used for describing scenario information corresponding to the first case node and indicating the tasks to be executed, at the first case node, by the roles related to the first case node;
Based on the node information, displaying a first electronic card on the first student client and displaying a first interactive interface on the first electronic card, wherein the first interactive interface displays content related to the case of the first case node;
In response to a first trigger operation performed by the first student on the first interactive interface to confirm entering the case, displaying a second interactive interface corresponding to the first case node on the first electronic card; and
Transmitting the interaction information input by the first student at the second interaction interface to the training server, and
The operation of displaying a second interactive interface corresponding to the first case node on the first electronic card comprises the following steps: displaying the second interactive interface on the first electronic card, wherein the second interactive interface comprises an audio recording window, and
The operation of sending the interaction information input by the first student at the second interaction interface to the training server comprises the following steps: transmitting the voice audio information recorded by the first learner through the audio recording window as the interactive information to the training server, and wherein,
The method further comprises the steps of: in response to a second trigger operation performed by the first learner on the second interactive interface to confirm completion of the task, sending to the training server confirmation information that the first learner has completed the task related to the first case node, and wherein,
The method further comprises the steps of:
displaying a second electronic card on the first student client and displaying a third interactive interface on the second electronic card, wherein the third interactive interface comprises a playing window for playing audio and video information of a third student, the third student is a student associated with a second case node, and the second case node is not associated with the first student; and
And receiving, from the training server, the audio and video information of the third student captured by a third terminal device of the third student, and playing the audio and video information through a player associated with the playing window.
2. The method of claim 1, wherein the operation of displaying the second interactive interface corresponding to the first case node on the first electronic card comprises:
displaying the second interactive interface on the first electronic card, wherein the second interactive interface comprises a video chat window for video chatting with a second learner, and wherein the second learner plays another role related to the first case node; and
And receiving, from the training server, the audio and video information of the second student captured by a second terminal device of the second student, and playing the audio and video information through a player associated with the video chat window.
3. The method of claim 2, wherein there are a plurality of second learners, and displaying the second interactive interface on the first electronic card comprises: displaying the second interactive interface on the first electronic card, wherein the second interactive interface comprises video chat windows respectively corresponding to the plurality of second learners.
4. A method according to claim 2 or 3, wherein the operation of sending the interaction information input by the first learner at the second interaction interface to the training server comprises:
And in the process of video communication between the first student and the second student, acquiring audio and video information of the first student, and sending the audio and video information to the training server as the interaction information.
5. The method as recited in claim 1, further comprising:
Displaying a third electronic card on the first student client and displaying a fourth interactive interface on the third electronic card, wherein the fourth interactive interface comprises a video chat window for performing video chat with a training manager; and
And receiving, from the training server, the audio and video information of the training manager captured by a fourth terminal device of the training manager, and playing the audio and video information through a player associated with the video chat window.
6. An electronic card-based immersive virtual training method for a training server, characterized by comprising the following steps:
Receiving case configuration information from a terminal device of an administrator, wherein the case configuration information indicates a training case for training, the trainees to be trained, and the roles assigned to the respective trainees in the training case;
sending, according to the case configuration information, role information related to the training case to the terminal device of each student related to the training case, wherein the role information is used for indicating the role played by each student in the training case;
determining a current case node from the case nodes of the training cases;
Determining a role related to the current case node and node information related to the current case node, wherein the node information is used for describing scene information corresponding to the current case node and indicating tasks executed by the role related to the current case node in the current case node;
Sending the node information to the terminal device of the student related to the current case node; and
Receiving interactive information related to the current case node from a terminal device of a learner related to the current case node, and
When the current case node is the last case node of the case scene to which it belongs, the current case node interfaces with a plurality of parallel follow-up case nodes, and the plurality of follow-up case nodes belong to different case scenes, the operation of starting a follow-up case node of the current case node comprises:
Determining weight values of all the case nodes in the same case scene according to the distances between the case nodes belonging to the same case scene as the current case node and the current case node;
Weighting node characteristic information corresponding to each case node in the same case scene according to the weight value; and
And determining a target case node to be started from the plurality of follow-up case nodes by using a second judging model corresponding to the second judging node according to the weighted node characteristic information.
7. The method as recited in claim 6, further comprising:
receiving confirmation information of completing tasks related to the current case node from terminal equipment of a student related to the current case node;
generating interactive text information corresponding to the interactive information in response to the confirmation information;
coding the interactive text information by using a coding model based on an attention mechanism to generate node characteristic information corresponding to the current case node; and
And starting a subsequent case node of the current case node.
8. The method of claim 7, wherein, when the current case node interfaces with a plurality of parallel subsequent case nodes and the plurality of subsequent case nodes belong to the same case scene as the current case node, starting a subsequent case node of the current case node comprises:
And determining a target case node to be started from the plurality of follow-up case nodes by using a first judging model corresponding to the first judging node according to the node characteristic information.
CN202410176530.5A 2024-02-08 2024-02-08 Immersion type virtual training method and device based on electronic card and storage medium Active CN117750090B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410176530.5A CN117750090B (en) 2024-02-08 2024-02-08 Immersion type virtual training method and device based on electronic card and storage medium


Publications (2)

Publication Number Publication Date
CN117750090A CN117750090A (en) 2024-03-22
CN117750090B true CN117750090B (en) 2024-05-03

Family

ID=90251151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410176530.5A Active CN117750090B (en) 2024-02-08 2024-02-08 Immersion type virtual training method and device based on electronic card and storage medium

Country Status (1)

Country Link
CN (1) CN117750090B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102137006B1 (en) * 2019-11-14 2020-07-23 장봉조 Safety education training system using virtual reality device and method controlling thereof
CN112837573A (en) * 2021-01-11 2021-05-25 广东省交通运输高级技工学校 Game teaching platform and method
KR102339454B1 (en) * 2020-12-09 2021-12-15 인플랩 주식회사 Serious game system for disaster response education and training
CN114797101A (en) * 2022-04-11 2022-07-29 平安科技(深圳)有限公司 Publishing method, device, equipment and storage medium of training game resources
US11474596B1 (en) * 2020-06-04 2022-10-18 Architecture Technology Corporation Systems and methods for multi-user virtual training
CN115599471A (en) * 2021-06-28 2023-01-13 中国石油化工股份有限公司(Cn) Game course design system and method based on accident analysis

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170018200A1 (en) * 2015-01-07 2017-01-19 Ruth Nemire Method and system for virtual interactive multiplayer learning



Similar Documents

Publication Publication Date Title
US6705869B2 (en) Method and system for interactive communication skill training
US11694564B2 (en) Maze training platform
CN104021441B (en) A kind of system and method for making the electronics resume with video and audio
CN104463423A (en) Formative video resume collection method and system
CN109461334A (en) One kind being based on the online audio-video Question Log share system of interconnection architecture and method
CN110516749A (en) Model training method, method for processing video frequency, device, medium and calculating equipment
CN106126524A (en) Information-pushing method and device
CN108597281A (en) A kind of interactive learning system and interaction type learning method
US20190287419A1 (en) Coding training system using drone
CN111417014B (en) Video generation method, system, device and storage medium based on online education
CN111427990A (en) Intelligent examination control system and method assisted by intelligent campus teaching
KR102507260B1 (en) Service server for generating lecturer avatar of metaverse space and mehtod thereof
CN117651960A (en) Interactive avatar training system
CN110609970A (en) User identity identification method and device, storage medium and electronic equipment
CN112382151B (en) Online learning method and device, electronic equipment and storage medium
CN117750090B (en) Immersion type virtual training method and device based on electronic card and storage medium
KR101808631B1 (en) Method of posting poll response and a poll service server providing the method thereof
CN113257060A (en) Question answering solving method, device, equipment and storage medium
CN115311920B (en) VR practical training system, method, device, medium and equipment
KR101562012B1 (en) System and method providing military training mode using smart device
CN112533009B (en) User interaction method, system, storage medium and terminal equipment
WO2022018453A1 (en) Context aware assessment
Bahreini et al. FILTWAM-A framework for online game-based communication skills training-Using webcams and microphones for enhancing learner support
Adu-Gyamfi et al. Reflections on science, technology and innovation on the aspirations of the Sendai framework for disaster risk reduction
JP7402363B2 (en) Disaster prevention training system and disaster prevention training method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant