WO2023241369A1 - Question answering method and apparatus, and electronic device - Google Patents

Question answering method and apparatus, and electronic device

Info

Publication number
WO2023241369A1
Authority
WO
WIPO (PCT)
Prior art keywords: prop, target, answering, scene, answer
Application number: PCT/CN2023/097736
Other languages: English (en), Chinese (zh)
Inventor: 杨静莲
Original Assignee: 北京新唐思创教育科技有限公司
Application filed by 北京新唐思创教育科技有限公司
Publication of WO2023241369A1

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 7/00 - Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B 7/02 - Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 - Indexing scheme relating to G06F3/01
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • the present disclosure relates to the field of Internet teaching technology, and in particular to a question answering method and device and electronic equipment.
  • Online education classes have become more diverse and often include quizzes based on the classroom content.
  • In some approaches, multiple two-dimensional buttons representing candidate answers can be displayed on the classroom screen, and student users judge and answer the test questions by clicking on those two-dimensional buttons.
  • a question answering method for use in a virtual education scene.
  • the virtual education scene has at least a virtual character and at least one scene prop.
  • the method includes: assigning at least one answering prop to the virtual character based on the question information,
  • the answer props are used to represent the answer content for the question information.
  • Each answer prop has a matching relationship with at least one scene prop.
  • Answering props and scene props that have a matching relationship are associated with the same question. In response to a trigger operation on a target answering prop, the target answering prop and a target scene prop are displayed in a fused manner, where the target answering prop is one of the at least one answering prop and the target scene prop is one of the at least one scene prop. A matching result between the target answering prop and the target scene prop, determined by the matching relationship, is then displayed; the matching result indicates whether the answer content represented by the target answering prop is correct.
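  • As a purely illustrative aid (not part of the disclosure), the following minimal Python sketch models the data flow described above: answering props that represent answer content, scene props, a matching relationship keyed to a question, and the matching result shown after the fused display. All class, function and identifier names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AnsweringProp:
    prop_id: str   # identity identifier of the answering prop
    answer: str    # answer content it represents, e.g. "C"

@dataclass(frozen=True)
class SceneProp:
    scene_id: str  # identity identifier of the scene prop

# Matching relationship configured for one question:
# answer content -> scene prop ids it correctly matches (assumed example data).
CORRECT_MATCHES = {"C": {"scene_prop_C"}}

def on_trigger(question_id: str, prop: AnsweringProp, scene: SceneProp) -> bool:
    """Fuse-display the target answering prop with the target scene prop
    and return the matching result (True = the answer content is correct)."""
    print(f"[{question_id}] fusing {prop.prop_id} with {scene.scene_id}")
    correct = scene.scene_id in CORRECT_MATCHES.get(prop.answer, set())
    print("matching result:", "correct" if correct else "incorrect")
    return correct

# Example: the student selects the prop representing "C" and reaches scene prop C.
on_trigger("question_1", AnsweringProp("flag_1", "C"), SceneProp("scene_prop_C"))
```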
  • a question answering device for use in a virtual education scene.
  • the virtual education scene has at least a virtual character and at least one scene prop.
  • The device includes: a processing module configured to allocate at least one answering prop to the virtual character based on question information.
  • the answer prop is used to represent the answer content for the question information.
  • Each answer prop has a matching relationship with at least one scene prop.
  • the answer props and scene props with matching relationships are associated with the same question;
  • a display module configured to, in response to a triggering operation on a target answering prop, display the target answering prop and a target scene prop in a fused manner, where the target answering prop is one of the at least one answering prop and the target scene prop is one of the at least one scene prop;
  • the processing module is further configured to display the matching result between the target answering prop and the target scene prop determined by the matching relationship.
  • the matching result is used to indicate whether the answering content represented by the target answering prop is correct.
  • an electronic device including: a processor; and a memory storing a program; wherein the program includes instructions that, when executed by the processor, cause the processor to perform a method described in the exemplary embodiments of the present disclosure.
  • a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform a method according to an exemplary embodiment of the present disclosure.
  • There is a matching relationship between the answering props assigned to the virtual character and at least one scene prop; answering props and scene props with a matching relationship are associated with the same question, and the answering props represent answer content based on the question information.
  • The matching result between the target answering prop and the target scene prop, determined by the matching relationship, can be displayed, and whether the answer content indicated by the target answering prop is correct is then determined based on the matching result.
  • The method of the exemplary embodiments of the present disclosure uses the ability of answering props to represent answer content and assigns the answering props to the virtual character, so that the student user can, through human-computer interaction, control the virtual character to display the selected target answering prop fused with the target scene prop. This enhances the sense of immersion and interest in answering questions, allows student users to truly participate in the test session, increases their enthusiasm and activity in answering questions, and enhances their sense of course experience, thereby improving the learning effect.
  • Because the answering props represent answer content based on the question information, for different questions the answer content represented by the answering props assigned to the virtual character may or may not be the same.
  • FIG. 1 illustrates a schematic diagram of an example system in which various methods described herein may be implemented in accordance with an exemplary embodiment of the present disclosure
  • Figure 2 shows a flow chart of a question answering method according to an exemplary embodiment of the present disclosure
  • Figure 3 shows a schematic diagram of an answering prop representing the answering content according to an exemplary embodiment of the present disclosure
  • Figure 4 shows a schematic diagram of another answering prop representing the answering content according to an exemplary embodiment of the present disclosure
  • Figure 5A shows a schematic diagram of an effective activity area of a virtual education scene according to an exemplary embodiment of the present disclosure
  • Figure 5B shows a schematic diagram of operation prompt information according to an exemplary embodiment of the present disclosure
  • Figure 5C shows a schematic diagram of another operation prompt information according to an exemplary embodiment of the present disclosure
  • Figure 5D shows a schematic diagram of the integrated display of a target answering prop and a target scene prop according to an exemplary embodiment of the present disclosure
  • Figure 6 shows an operation prompt flowchart of an exemplary embodiment of the present disclosure
  • Figure 7 shows a schematic block diagram of a module of a question answering device according to an exemplary embodiment of the present disclosure
  • FIG. 8 shows a schematic block diagram of a chip according to an exemplary embodiment of the present disclosure
  • FIG. 9 shows a structural block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
  • The term “include” and its variations are open-ended, i.e., “including but not limited to.”
  • the term “based on” means “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; and the term “some embodiments” means “at least some embodiments”.
  • Relevant definitions of other terms will be given in the description below. It should be noted that concepts such as “first” and “second” mentioned in this disclosure are only used to distinguish different devices, modules or units, and are not used to limit the order of the functions performed by these devices, modules or units, or their interdependence.
  • the Metaverse is a virtual world that is linked and created using technological means to map and interact with the real world. It is a digital living space with a new social system.
  • a virtual scene is a virtual scene displayed (or provided) by an application when it is run on the terminal.
  • the virtual scene can be a simulation environment of the real world, a semi-simulation and semi-fictitious virtual environment, or a purely fictitious virtual environment.
  • the virtual scene in the exemplary embodiment of the present disclosure is a three-dimensional virtual scene.
  • A non-player character (abbreviated as “NPC”) is a type of game character that is not controlled by the player; it guides the player in the game and plays an important, core role in the game.
  • the non-player character in the exemplary embodiment of the present disclosure refers to a non-user character.
  • the question consists of two parts: the question stem and the alternatives.
  • the question stem uses declarative sentences or interrogative sentences to create problem-solving situations and ideas.
  • Alternative options refer to candidate options or candidate answers that are directly related to the question stem, and are divided into correct items and interference items.
  • the questions include judgment questions, matching questions and multiple-choice questions.
  • The multiple-choice questions include single-choice questions and indefinite-choice questions (the latter including multiple-selection questions and variable-selection questions, where a variable-selection question may require either multiple answers or a single answer); these are objective test questions.
  • student users can log in to the application client of the online education classroom and select the required course to start online learning with the teacher.
  • multiple two-dimensional buttons corresponding to multiple candidate answers to the question can be displayed on the display interface.
  • The student user completes the answer operation by clicking the corresponding two-dimensional button, and the server then compares the candidate answer corresponding to the clicked two-dimensional button with the correct answer to the question to determine whether the student user's answer is correct.
  • When the online education classroom is a three-dimensional virtual education scene, two-dimensional buttons integrate poorly with the scene and interrupt the interaction between the virtual character and the three-dimensional education scene.
  • multiple option areas corresponding to multiple candidate answers to the question can also be displayed on the display interface, and the student user completes the selection by moving the coordinates of the virtual character and moving the virtual character to the corresponding area.
  • Although this method of completing the answer by moving the coordinates of the virtual character suits three-dimensional virtual education scenes, for true-or-false questions a “yes” area and a “no” area are displayed on the display interface, and the student user must move the virtual character into the “yes” area or the “no” area, so the answering efficiency is low.
  • In addition, a display interface containing multiple option areas connects rigidly with the answering task in the three-dimensional virtual scene, which is not conducive to the interaction between the virtual character and the three-dimensional virtual education scene.
  • Exemplary embodiments of the present disclosure provide a question answering method and apparatus and electronic equipment, so that when a question answering test is conducted in a virtual education scene, the student user controls the virtual character to fuse and display the answering props and scene props to complete the answering, thereby enhancing the sense of immersion and interest in answering questions, allowing student users to truly participate in the answering and testing process, increasing their enthusiasm and activity in answering questions, enhancing their sense of course experience, and improving the learning effect.
  • users in the exemplary embodiments of the present disclosure may be users who answer questions, and these users may be student users in a narrow sense or student users in a broad sense.
  • A student user in a narrow sense refers to a student user who has a teaching relationship with the teacher user who posts the question;
  • a student user in a broad sense refers to any user who needs to answer questions posted by another user.
  • the virtual characters in the exemplary embodiments of the present disclosure may refer to student virtual characters.
  • the question answering method of the exemplary embodiment of the present disclosure can be applied to various virtual education scenes that can be used for learning.
  • This virtual education scene can be a gamified virtual education scene, including but not limited to a metaverse-based virtual education scene, an augmented reality scene, or a virtual reality scene.
  • FIG. 1 illustrates a schematic diagram of an example system in which various methods described herein may be implemented in accordance with an exemplary embodiment of the present disclosure.
  • a system 100 may include: a first terminal 101 , a second terminal 102 and a server 103 .
  • the first terminal 101 and the second terminal 102 are both installed with clients, and the clients may be different.
  • the first client 1011 can be a teacher client
  • the second client 1021 can be a student client.
  • the teacher client has higher permissions than the student client. It can configure various teaching tasks and manage student clients.
  • the first terminal 101 installs and runs a teacher client that supports teacher users for teaching interaction
  • the first user interface of the teacher client is displayed on the screen of the first terminal 101;
  • The first user interface displays the virtual teaching scene, character control controls, a management interface control and a message input control.
  • the second terminal 102 is installed and runs a student client that supports student users for teaching interaction.
  • the screen of the second terminal 102 displays the second user interface of the student client;
  • The second user interface displays the virtual teaching scene, character control controls, and a message input control.
  • the virtual education scene may be a virtual scene related to the teaching content, or may be designed according to the teaching content.
  • each virtual object has its own shape and volume in the virtual scene, occupying a part of the space in the virtual scene.
  • The virtual object may be a three-dimensional model whose form is based on the object attributes it represents.
  • the same virtual object can show different appearances by wearing different skins.
  • the virtual character in the exemplary embodiment of the present disclosure may be a virtual character participating in teaching interaction in a virtual education scene.
  • the number of virtual characters participating in the teaching interaction can be set in advance, or can be dynamically determined based on the number of clients joining the interaction.
  • the virtual characters may at least include user characters such as student virtual characters, teacher virtual characters controlled through character operation controls, or non-user characters set for interaction in virtual education scenes.
  • character control controls can be used to control the user character, which can include direction control controls and action control controls.
  • The direction control control can control the user character to move in a target direction;
  • the action control control can control the user character to perform preset actions, for example jumping, waving, running, nodding, etc., but is not limited to these.
  • the icon of the motion control control on the student client is a jumping action icon. When the student user clicks the jumping action icon, the student avatar can show jumping actions.
  • the management interface control can be used to call up the management interface.
  • teacher users can open, close and configure various teaching tasks and view the task execution status of various teaching tasks.
  • the teaching tasks can be various answering tasks, and the task execution status can include the display of answering results and the remaining time to answer the questions.
  • the message input control can be used for users to input interactive messages, and users can communicate with each other through the message input control.
  • Message input controls can include voice input controls, text input controls, etc.
  • the questions can be published in the form of text and/or audio.
  • When publishing in text form, teacher users can input the question content through the text input control; when publishing in audio form, they can input the audio of the question content through the voice input control.
  • teacher users can publish questions in two ways.
  • In the first method, the teacher user inputs the question content through the text input control, and the teaching task is displayed in the virtual education scene.
  • the server can convert the question content into audio and broadcast the audio to the student client.
  • the server can control non-user characters to play the audio of the question content.
  • In the second method, the teacher user inputs the audio of the question content through the voice input control, and the server broadcasts the audio of the question content to each student client; at the same time, the server can convert the audio of the question content into text and display it in the virtual education scene.
  • The clients installed on the first terminal 101 and the second terminal 102 may be applications based on the same operating system platform or on different operating system platforms (Android, iOS, Huawei HarmonyOS, etc.).
  • the first terminal 101 may generally refer to one of the plurality of terminals
  • the second terminal 102 may generally refer to another of the plurality of terminals.
  • This embodiment only takes the first terminal 101 and the second terminal 102 as an example.
  • the device types of the first terminal 101 and the second terminal 102 are the same or different, and the device types include at least one of a smart phone, a tablet computer, an e-book reader, a digital player, a laptop computer and a desktop computer.
  • the first terminal 101 and the second terminal 102 can be connected to the server 103 through a wireless network or a wired network.
  • the server 103 includes at least one of a server, a server cluster composed of multiple servers, a cloud computing platform, and a virtualization center.
  • the server 103 is used to provide background services for online teaching interaction.
  • the server undertakes the main calculation work and the terminal undertakes the secondary calculation work; or the server 103 undertakes the secondary calculation work and the terminal undertakes the main calculation work; or the server 103 and the terminal adopt a distributed computing architecture for collaborative computing.
  • The server 103 includes a memory 1031, a processor 1032, a user account database 1033, a task service module 1034, and a user-oriented input/output interface (I/O interface) 1035.
  • the processor 1032 is used to load the instructions stored in the server 103 and process the data in the user account database 1033 and the task service module 1034;
  • the user account database 1033 is used to store the user accounts used by the first terminal 101 and the second terminal 102.
  • The task service module 1034 is used to provide multiple virtual education scenes for the user to choose from, such as desert scenes, tropical rainforest scenes, or space teaching scenes; the user-oriented I/O interface 1035 is used to establish communication and exchange data with the first terminal 101 and/or the second terminal 102 through a wireless network or a wired network.
  • the question-answering method provided by the exemplary embodiments of the present disclosure can be applied in a virtual education scenario, and the virtual teaching scenario can be selected according to the actual situation.
  • Since the virtual teaching scene is a fully realistic three-dimensional scene, no special virtual reality or augmented reality hardware is required, which reduces the hardware requirements on the equipment.
  • Users participating in question-answering interactions may include teacher users and at least one student user. When a teacher user logs into the teacher client, the teacher user can choose to enable access to a certain virtual education scene on the management interface according to the course schedule. At this time, both teacher users and student users can enter the virtual education scene.
  • The server 103 can retrieve the account information of the teacher user and the student users from the user account database 1033, display the teacher virtual character in the virtual education scene based on the teacher user's account information, and display the student virtual characters in the virtual education scene based on the student users' account information.
  • the teacher user can control the teacher's virtual character through the character control control of the teacher client, and input the content that the teacher's virtual character wants to express through the message input control.
  • Student users can operate the student virtual character through the character control control of the student client, and input the content that the student virtual character wants to express through the message input control.
  • the teacher user can use the message input control to guide the students to gradually enter the class state through voice.
  • the virtual education scene is a desert scene
  • The teacher user says, by voice: “Students who have come to the desert can move freely first and observe the environment around us.” The student user can then control the student virtual character to move freely within the desert scene through the character control control, allowing student users to gradually enter the classroom state through their student virtual characters.
  • the question answering method in the exemplary embodiment of the present disclosure is used in a virtual education scene.
  • The virtual education scene has at least a virtual character and at least one scene prop, and the method can be applied to a terminal or to a chip in the terminal.
  • The method of the exemplary embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
  • FIG. 2 shows a flowchart of a question answering method according to an exemplary embodiment of the present disclosure.
  • the question answering method according to the exemplary embodiment of the present disclosure includes:
  • Step 201 Assign at least one answering prop to the virtual character based on the question information.
  • the answer props are used to represent the answer content for the question information.
  • the answer content may be the answer content for the question information released by the teacher user.
  • Each answer prop has a matching relationship with at least one scene prop, and the answer props and scene props with matching relationships are associated with the same question.
  • Exemplary embodiments of the present disclosure can enable the teacher user to configure the answering prop information in advance according to the question information in the virtual education scene management interface of the teacher client when the teacher user activates the question answering interactive skill on the teacher client.
  • Answer prop information may include front-end configuration information and back-end configuration information.
  • the front-end configuration information may include the rendering information of the answering props
  • the background configuration information may include the matching relationship between the answering props and the scene props of the same question.
  • As for the answer content represented by the answering props, it can exist as part of the rendering information of the answering props, or it can be announced directly by the teacher user in the form of voice or text.
  • Before at least one answering prop is allocated to the virtual character based on the question information, when the server receives the answering prop information from the terminal, it can generate at least one answering prop in the virtual education scene based on the rendering information of the answering props; at this time, the virtual education scene can display at least one answering prop.
  • the rendering information of the answering props may include basic rendering parameters and indication information rendering parameters.
  • the server can render at least one answering prop in the virtual education scene based on basic rendering parameters.
  • the basic rendering parameters include initial position parameters, display parameters, identification parameters and quantity parameters.
  • the display parameters may be parameters related to the display effect of each answering prop in the virtual education scene, such as shape, color, size, etc.
  • Identification parameters are identity parameters set to distinguish different answering props.
  • the answering props can be flag props, small red flower props, five-pointed star props, circle props, etc., but are not limited to these.
  • a “flag prop” is used as an answering prop for an example description.
  • the server can display a corresponding number of answering props at the initial position corresponding to the initial position parameter in the virtual education scene based on the initial position parameter, display parameter and quantity parameter.
  • Each answering prop has a unique identity identifier; therefore, the server can establish, based on the identity identifier, a matching relationship between each answering prop and at least one scene prop.
  • The server can also configure the identity of each scene prop in the background configuration information of the system; therefore, a matching relationship between each answering prop and at least one scene prop can be established based on the identity of the answering prop and the identity of the scene prop.
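  • The sketch below (illustrative Python only; the parameter names and identifier scheme are assumptions, not taken from the disclosure) shows how basic rendering parameters could drive the generation of answering props at an initial position, each with a unique identity identifier, and how a matching relationship could then be keyed by the identities of answering props and scene props.

```python
from dataclasses import dataclass

@dataclass
class BasicRenderingParams:
    initial_position: tuple  # (x, y, z) where the answering props appear in the scene
    display: dict            # display parameters, e.g. shape, color, size
    id_prefix: str           # identification parameter used to derive unique identities
    quantity: int            # quantity parameter: how many answering props to generate

def generate_answering_props(params: BasicRenderingParams) -> list:
    """Render the corresponding number of answering props at the initial position,
    each carrying a unique identity identifier."""
    return [
        {"prop_id": f"{params.id_prefix}_{i}",
         "position": params.initial_position,
         **params.display}
        for i in range(1, params.quantity + 1)
    ]

props = generate_answering_props(
    BasicRenderingParams(initial_position=(10.0, 0.0, 5.0),
                         display={"shape": "flag", "color": "red"},
                         id_prefix="flag", quantity=4))

# Background configuration: matching relationship keyed by identity identifiers
# (the specific pairs are assumed for illustration).
matching = {("flag_1", "scene_prop_C"), ("flag_2", "scene_prop_A")}
print(props[0]["prop_id"], ("flag_1", "scene_prop_C") in matching)
```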
  • the server can also render instruction information for each answering prop within the virtual education scene based on the instruction information rendering parameters.
  • the answering prop displays instruction information, and the instruction information is used to indicate the content of the answer.
  • the answer props display different instruction information, and different instruction information indicates different answer content. Student users can answer according to the instruction information displayed on the answer props.
  • answer props can represent the answer content in a static visualization way, or they can also express the answer content in a dynamic visualization way.
  • the answer content represented by the answer prop is fixed at any time.
  • the answering content represented by the answering prop may be different at different times or different periods.
  • each answer prop can indirectly or directly represent an answer content.
  • When the answering prop indirectly represents the answer content, the user cannot directly know from the answering prop itself which answer content it represents.
  • the answering content represented by the answering prop can be informed through voice information or text information.
  • the answering props assigned to users belonging to the same group represent the same answering content
  • the answering props assigned to users belonging to different groups represent different answering contents. This can distinguish the answering situations of users in different groups.
  • it can also realize the sharing of answer information between users in the same group and the confidentiality of answer information between users in different groups.
  • the answer content represented by the answer props assigned to different users belonging to the same group may also be different, so that the confidentiality of answer information between users can also be achieved.
  • the answer content represented by the answer props can be played in the form of text, images or even audio.
  • the user can directly know the answering content represented by the answering prop from the answering prop. After receiving the answering prop, he can answer according to the answering content represented by the answering prop. Therefore, the answering steps can be reduced and the answering efficiency can be improved.
  • each team member can answer together according to the answering content directly represented by the answering props assigned to them, which can achieve teamwork among team members and improve team cohesion.
  • the answer props assigned to the virtual character can represent one of the answer contents in a static visualization manner, or can represent multiple answer contents in a dynamic visualization manner.
  • each answer prop represents a unique answer content, and the number of answer props for the same question can be multiple.
  • different answer props for the same question represent different answer contents in a static visualization manner, and the matching relationship is a matching relationship between the answer content represented by the answer props in a static visualization manner and at least one scene prop.
  • Figure 3 shows a schematic diagram of an answering prop representing the answer content according to an exemplary embodiment of the present disclosure.
  • As shown in Figure 3, four flags are assigned to the virtual character for the same question, and each flag indicates the answer content in a static visualization manner.
  • the answer content represented by flag 301 is "A”
  • the answer content represented by flag 302 is "B”
  • the answer content represented by flag 303 is "C”
  • the answer content represented by flag 304 is "D”.
  • "A”, “B”, “C” and “D” each have a matching relationship with at least one scene prop.
  • the number of answer props for the same question can be one or multiple.
  • When there is one answering prop, that answering prop is the target answering prop, and the answer content can be preset answer content.
  • The target answering prop has multiple presentation periods and dynamically and visually represents multiple different preset answer contents within each presentation period; in this case, the matching relationship is the matching relationship between each preset answer content and at least one scene prop.
  • The preset answer content can be visually displayed through different marks, which can be one of color marks, shape marks, image marks, text marks, symbol marks, etc. There is a matching relationship between the preset answer content corresponding to the same mark and at least one scene prop.
  • the preset display order of multiple preset answer contents can also be set on the management interface, and the multiple preset answer contents can be visually and dynamically displayed according to the preset display order.
  • The presentation period and the number of presentation periods of the target answering prop can be set on the management interface according to the actual situation. The display time interval of the multiple preset answer contents within the same presentation period can therefore be determined from the presentation period and the number of preset answer contents, and the answering prop can then periodically and dynamically display the multiple preset answer contents according to the display time interval and the preset display order. A preset answering time can also be determined from the presentation period and the number of presentation periods, and can be used for the countdown of the student user's answering process. It should be understood that the presentation period is the product of the display time interval and the number of preset answer contents.
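  • A small worked example (plain Python, illustrative only) of the arithmetic just described, using a presentation period of 12 seconds, 4 preset answer contents and 5 presentation periods: the display time interval is 12 / 4 = 3 seconds and the preset answering time is 12 × 5 = 60 seconds.

```python
def display_time_interval(presentation_period_s: float, num_contents: int) -> float:
    # presentation period = display time interval * number of preset answer contents
    return presentation_period_s / num_contents

def preset_answering_time(presentation_period_s: float, num_periods: int) -> float:
    # total countdown available for the student user's answering process
    return presentation_period_s * num_periods

period, contents, periods = 12.0, ["A", "B", "C", "D"], 5
interval = display_time_interval(period, len(contents))  # 3.0 seconds
total = preset_answering_time(period, periods)           # 60.0 seconds

def content_at(t: float) -> str:
    """Preset answer content shown on the target answering prop at time t (seconds)."""
    return contents[int(t // interval) % len(contents)]

print(interval, total, [content_at(t) for t in (0, 3, 6, 9, 12)])
# 3.0 60.0 ['A', 'B', 'C', 'D', 'A']
```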
  • Figure 4 shows a schematic diagram of another answering prop representing the answering content according to an exemplary embodiment of the present disclosure.
  • a flag 401 is assigned to the virtual character for the same question, that is, a target flag.
  • The target flag can dynamically and visually represent four preset answer contents (such as “A”, “B”, “C” and “D”).
  • For the flag 401 that can represent four preset answer contents: when it visually represents “A”, it is called flag 401-A; when it visually represents “B”, it is called flag 401-B; when it visually represents “C”, it is called flag 401-C; and when it visually represents “D”, it is called flag 401-D.
  • flag 401-A, flag 401-B, flag 401-C and flag 401-D are the same flag props, but their content is different.
  • the display period can be set to 12 seconds, the number of display periods is 5, and the preset display order is "A”, “B”, “C” and “D”.
  • "A”, “B”, “C” and “D” are periodically dynamically displayed on flag 1 at intervals of 3 seconds in accordance with the preset display order.
  • "A", "B”, “C” and “D” represented by flag 401-A, flag 401-B, flag 401-C and flag 401-D respectively are in the flag.
  • the visual periodic dynamic display on 401 is based on the preset answer content displayed on the flag 401.
  • Any one of “A”, “B”, “C” and “D”, represented by flag 401-A, flag 401-B, flag 401-C and flag 401-D respectively, has a matching relationship with at least one scene prop.
  • the exemplary embodiments of the present disclosure dynamically and visually represent multiple different preset answer contents for the target answer props in each presentation period, and there is a matching relationship between any one of the preset answer contents and at least one scene prop.
  • the target answer props represent different preset answer contents at different periods within a display cycle. Therefore, the target answering props are reusable in the same question, so the number of configurations of answering props in the virtual education scene can be reduced in the same question, thereby improving the system's running speed and screen fluency, as well as the operational sensitivity of student users. This will enhance students’ user experience in the course.
  • The above matching relationship is configured when the answering prop information is configured on the management interface of the teacher client.
  • The exemplary embodiment of the present disclosure configures the matching relationship between the answering props and scene props of the same question in the background configuration information of the answering prop information.
  • the matching relationship can be a correct matching relationship or an incorrect matching relationship.
  • the answering props can represent the answering content in a static visualization manner, or can dynamically represent multiple different preset answering contents in each display period.
  • the following answer props use static visualization to represent the answer content as an example.
  • Table 1 shows a statistical table of matching relationships between answer props and scene props according to an exemplary embodiment of the present disclosure.
  • For flag 1, the matching relationship with scene prop C in question 1 is a correct matching relationship, while the matching relationship with scene prop C in question 2 is an incorrect matching relationship.
  • Flag 1 can be configured in different questions, and the matching relationship between the answering props and the scene props may differ from question to question. Therefore, the answering props assigned to the student character in the method of the exemplary embodiments of the present disclosure are reusable across different questions in the answering test session, can be used as plug-ins compatible with different virtual education scenes and different types of questions, and reduce the hardware configuration requirements, so that the methods of the exemplary embodiments of the present disclosure can run on common device models.
  • For example, the exemplary embodiment of the present disclosure configures one answering prop (flag 1) and four scene props (scene prop A, scene prop B, scene prop C and scene prop D) for question 1 on the management interface; in this case, the number of answering props and the number of scene props associated with the same question are different, as sketched below.
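  • The sketch below is one possible way (hypothetical data structure and names, not taken from the disclosure) to record the kind of background configuration Table 1 describes: per question, the scene props with which an answering prop has a correct or incorrect matching relationship, so that the same answering prop (flag 1) can be reused across questions with different outcomes. The correct match for question 2 is not stated in the excerpt and is marked as an assumption.

```python
# matching[(question_id, prop_id)] -> sets of correctly / incorrectly matching scene props
matching = {
    ("question_1", "flag_1"): {
        "correct": {"scene_prop_C"},
        "incorrect": {"scene_prop_A", "scene_prop_B", "scene_prop_D"},
    },
    ("question_2", "flag_1"): {
        "correct": {"scene_prop_A"},   # assumption: not specified in the excerpt
        "incorrect": {"scene_prop_C"},
    },
}

def is_correct_match(question_id: str, prop_id: str, scene_id: str) -> bool:
    """Look up whether fusing this answering prop with this scene prop is correct."""
    return scene_id in matching[(question_id, prop_id)]["correct"]

print(is_correct_match("question_1", "flag_1", "scene_prop_C"))  # True
print(is_correct_match("question_2", "flag_1", "scene_prop_C"))  # False
```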
  • When the teacher user configures the answering prop information on the management interface, the terminal sends the answering prop information to the server.
  • The server can determine the initial position of the answering props in the virtual education scene based on their initial position parameters, and render at that initial position the corresponding number of answering props with the display effect specified by the display parameters; each answering prop has a corresponding identity identifier. It should be understood that the server can also save the matching relationship between the answering props and scene props belonging to the same question, to provide a basis for subsequently determining the answering results.
  • After at least one answering prop is displayed at its initial position in the virtual education scene, the student user is required to control the virtual character to move to that initial position to receive the answering prop; the server then allocates at least one answering prop to the virtual character in response to the receiving operation for the answering prop.
  • Exemplary embodiments of the present disclosure can use route guidance to guide the student user in controlling the virtual character to find the target prop, which can be an answering prop.
  • exemplary embodiments of the present disclosure may use a route guidance method to guide the virtual character to move to the answering prop before merging and displaying the target answering prop and the target scene prop in response to a triggering operation on the target answering prop.
  • the route guidance method may be at least one of a visual guidance method, an air wall guidance method, and an audio guidance method, but is not limited to this.
  • the visual guidance method can be at least one of direction sign instructions, landmark instructions, arrow instructions, text prompts, etc., but is not limited to this.
  • The direction sign indication can be a sign, appearing at intervals, that shows the direction and distance of the locations of different scene props;
  • the landmark indication can be the scene prop identification information on the mini map of the virtual education scene interface;
  • the arrow indication can appear at intervals along the way, or can lead the virtual character forward without interruption;
  • text prompts can be text information input by the teacher on the management interface, or text information provided by non-user characters in the virtual education scene.
  • When the air wall guidance method is used to guide the virtual character to move to the target prop, air walls are used as invisible walls that partition the space, so that the user can see certain parts of the virtual education scene but cannot move the virtual character through them; the air wall guidance method thus guides the virtual character to move through the passable area.
  • In the audio guidance method, the target prop plays audio.
  • The audio can be configured in advance by the teacher user on the management interface.
  • The audio can be poetry, music, etc., but is not limited to these.
  • The exemplary embodiment of the present disclosure uses a route guidance method to guide the virtual character to receive the answering props at their initial position, which effectively prevents student users from exploring the virtual education scene fruitlessly and wasting answering time, improves their enthusiasm for answering questions, and thereby improves their question-answering efficiency.
  • At least one answering prop is assigned to the virtual character based on the question information.
  • the terminal and/or the server can bind each answering prop and the virtual character to obtain the binding relationship between each answering prop and the virtual character.
  • the binding relationship may be the binding relationship between the identity of each answering prop and the identity of the virtual character.
  • When answering questions on a single machine, the terminal performs the above binding operation to obtain the binding relationship between each answering prop and the virtual character; in this case, the server does not need to perform the binding operation.
  • When answering questions online, the terminal does not need to perform the binding operation and only sends the server a message indicating that the answering props have been successfully allocated to the virtual character; after receiving this message, the server binds each answering prop to the virtual character and obtains the binding relationship between each answering prop and the virtual character.
  • The number of answering props may be determined based on the question type. For example, for a multiple-choice question with scene props corresponding to four options, one answering prop can be assigned to the virtual character to represent the student user's answer, or four answering props can be assigned to represent the answer content corresponding to the four options, from which the student user selects one as the answer. As another example, for a four-to-four matching question that includes four connecting items and four connected items, four answering props can be assigned to the virtual character, respectively representing the answer content corresponding to the four connected items. In these cases, the answering props can represent the answer content in a static visualization manner, or can dynamically represent multiple different preset answer contents in each display period; the following examples use static visualization to represent the answer content.
  • Table 2 shows a statistical table of binding relationships between answering props and virtual characters according to an exemplary embodiment of the present disclosure. As shown in Table 2, for question 1, in response to the allocation operation for flag 1, flag 1 is assigned to virtual character A and bound to it, and the binding relationship between flag 1 and A is recorded as flag 1-A.
  • Exemplary embodiments of the present disclosure allocate at least one answering prop to a virtual character and obtain the binding relationship between the answering prop and the virtual character. Therefore, the virtual character to which an answering prop belongs can be determined based on the identity of the answering prop and the binding relationship, which improves efficiency when the answering results are determined later.
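  • The binding relationship of Table 2 can be thought of as a lookup from the identity of an answering prop to the identity of the virtual character it was allocated to, as in this illustrative sketch (function and variable names are hypothetical):

```python
# Binding relationships recorded when answering props are allocated (cf. Table 2).
prop_to_character = {}

def bind(prop_id: str, character_id: str) -> str:
    """Bind an answering prop to the virtual character it is allocated to."""
    prop_to_character[prop_id] = character_id
    return f"{prop_id}-{character_id}"   # e.g. "flag_1-A"

def owner_of(prop_id: str) -> str:
    """Resolve which virtual character an answering prop belongs to."""
    return prop_to_character[prop_id]

print(bind("flag_1", "A"))   # flag_1-A
print(owner_of("flag_1"))    # A
```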
  • At least one answering prop and the virtual character can be displayed in a fused manner.
  • The fused display here may refer to displaying the answering props together with the virtual character in a reasonable manner based on the posture of the virtual character and the attributes of the answering props.
  • a backpack for storing answering props can be configured for the virtual character.
  • the backpack can be configured on the virtual character, or can be configured in an operation bar of the operation interface.
  • The system assigns at least one answering prop to the virtual character and fuses the answering props with the virtual character's backpack. After the fused display, the answering props are visually shown in the backpack, and the student user can intuitively see that the virtual character has received the answering props and can start exploring the virtual education scene to look for target scene props.
  • the answering prop can also be hidden. This method of hiding the answering prop can simplify the picture in the virtual scene.
  • The hidden answering props can be displayed in response to certain trigger operations on the virtual character. For example, in response to a long-press operation on the virtual character, the hidden answering props are displayed, making it convenient for student users to view the answering props and select the target answering prop for answering; at the same time, this enhances the interest of the answering test session and improves the student users' enthusiasm for answering.
  • the student user can control the virtual character to find scene props in the virtual education scene.
  • a route guidance method can be used to guide the virtual character to move to the target props, and the target props can be the target scene props.
  • For the route guidance methods, please refer to the route guidance methods described above for the answering props.
  • Exemplary embodiments of the present disclosure open, for different questions, corresponding activity areas for the virtual characters in the virtual education scene, so as to improve the efficiency of student users in answering questions.
  • an effective activity area that the virtual character can explore is opened on the management interface for the question information.
  • the effective activity area can be divided by air walls.
  • the air wall can be an invisible wall to separate the space, so that users can see some virtual education scenes but cannot control the passage of virtual characters.
  • the effective activity area contains scene props corresponding to the question information.
  • the virtual character can explore the effective activity area, find the scene props corresponding to the target answering props, and finally complete the answer to the question, thus improving the answering efficiency of student users.
  • FIG. 5A shows a schematic diagram of an effective activity area of a virtual education scene according to an exemplary embodiment of the present disclosure.
  • Scene prop A, scene prop B, scene prop C and scene prop D are all located in the effective activity area separated by the air wall.
  • the upper left corner of the display interface is the effective activity area.
  • Student users can control A 502, B 503 and C 504 respectively to explore within the effective activity area based on the location information of the scene props on the mini map 501, and find the target scene prop according to the answer content represented by flag 1, thereby avoiding the ineffective exploration caused by the large area of the virtual education scene and improving the efficiency of student users in answering questions.
  • Step 202 In response to the triggering operation on the target answering prop, the target answering prop and the target scene prop are displayed in a fused manner.
  • the target answering prop is one of at least one answering prop
  • the target scene prop is one of at least one scene prop.
  • When multiple answering props are assigned, exemplary embodiments of the present disclosure can display the multiple answering props in a superimposed manner.
  • The superimposed display can be a completely overlapped display or a partially overlapped display.
  • In either case, the answer content represented by the multiple answering props is visually displayed, making it convenient for student users to select the target answering prop from the multiple answering props to answer.
  • the target answering prop in the exemplary embodiment of the present disclosure may be any answering prop selected by the student user from multiple answering props assigned to the virtual character, and the target scene prop may be the location of any scene prop that the student user controls the virtual character to reach. That is to say, the target answering props and target scene props in the exemplary embodiment of the present disclosure do not have special meanings, and are only used to distinguish them from the answering props and scene props that have not been selected by the student user.
  • The trigger operation may be an operation in which the student user inputs a trigger instruction to the terminal, which may be one or more of the student user's click, check, touch, long-press or similar operations on the answering prop, but is not limited to these.
  • FIG. 6 shows an operation prompt flowchart of an exemplary embodiment of the present disclosure.
  • After at least one answering prop is allocated to the virtual character based on the question information, and before the target answering prop and the target scene prop are fused and displayed in response to the triggering operation on the target answering prop, the method of the exemplary embodiments of the present disclosure may further include:
  • Step 601 When it is determined that the virtual character is close to the scene props, display the operation prompt information in the virtual education scene.
  • the preset distance threshold between the position of the virtual character and the position of the scene props can be set in advance.
  • the student user controls the virtual character to move in the effective activity area.
  • When the virtual character moves close to a scene prop, if the distance between the virtual character's position and the scene prop's position is less than or equal to the preset distance threshold, the operation prompt information is displayed in the virtual education scene.
  • the preset distance threshold can be set according to the actual situation.
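  • The proximity check behind Step 601 can be sketched as a simple distance comparison against the preset distance threshold (illustrative Python; the positions and the threshold value are assumptions):

```python
import math

def should_show_prompt(character_pos, scene_prop_pos, threshold_d: float) -> bool:
    """Show the operation prompt when the distance between the virtual character
    and the scene prop is less than or equal to the preset distance threshold."""
    return math.dist(character_pos, scene_prop_pos) <= threshold_d

d = 2.0  # preset distance threshold set on the management interface (assumed value)
print(should_show_prompt((10.0, 0.0, 5.0), (11.0, 0.0, 5.5), d))   # True: within d
print(should_show_prompt((10.0, 0.0, 5.0), (20.0, 0.0, 5.0), d))   # False: too far
```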
  • the operation prompt information can be displayed in the virtual education scene in a visual manner, and the visualization method can be one or more of picture special effects, audio playback, text prompts, etc.
  • the picture special effects can be static special effects and dynamic special effects.
  • the static special effects can be lighting
  • the dynamic special effects can be one or more of flash, shaking, vibration, etc., but are not limited to this.
  • Audio playback can be playing music, poetry, etc.
  • FIG. 5B shows a schematic diagram of operation prompt information according to an exemplary embodiment of the present disclosure.
  • A 502, B 503 and C 504 explore the effective activity area in the virtual education scene, and the preset distance threshold between the position of the virtual character and the position of each scene prop is set in advance on the management interface to d.
  • When a virtual character comes within the threshold d of scene prop C, scene prop C displays a luminous special effect to remind the student user that this scene prop may be the correct answer.
  • FIG. 5C shows a schematic diagram of another operation prompt information according to an exemplary embodiment of the present disclosure.
  • a text prompt window pops up above the scene prop C to remind the student user that the scene prop may be the correct answer.
  • The operation prompt information is displayed in the virtual education scene so that the student user can answer according to it, which prevents the student user from searching aimlessly for the target scene prop and wasting time, and improves the efficiency of answering questions.
  • Step 602 Determine that the virtual character sends a confirmation message in response to the operation prompt information.
  • The confirmation message is used to indicate that the student user accepts the operation invitation for the target prop.
  • the virtual character can send a confirmation message or a give-up message in response to the operation prompt information.
  • If a give-up message is sent, the virtual character continues to explore the effective activity area to find the target scene prop.
  • If the student user clicks on scene prop C with the luminous special effect, it means that the student user accepts the operation invitation for scene prop C, and a confirmation message is sent to the terminal; after receiving the confirmation message, the terminal reports it to the server. If the student user does not click on scene prop C with the luminous special effect and moves away from it, it means that the student user has given up the operation invitation for scene prop C, and the virtual character continues to explore the effective activity area to look for the answer.
  • The target answering prop and the target scene prop are displayed in a fused manner.
  • the integrated display of the target answering prop and the target scene prop can indicate that the virtual character has completed the selection of the target scene prop.
  • Figure 5D shows a schematic diagram of the integrated display of a target answering prop and a target scene prop according to an exemplary embodiment of the present disclosure.
  • when the student user controlling A 502 clicks the "OK" button in the text prompt window shown in Figure 5C, the server performs the triggering operation on flag 1 of A 502, and flag 1 and scene prop C are fused and displayed.
  • exemplary embodiments of the present disclosure bind the target answering prop and the target scene prop in response to the triggering operation on the target answering prop, and obtain the binding relationship between the target answering prop and the target scene prop.
  • the target answer props can represent the answer content in a static visualization manner, or can dynamically represent multiple different preset answer contents in each presentation period.
  • each answering prop and scene prop has an identity, and the binding relationship here may be a binding relationship between the identity of the target answering prop and the identity of the target scene prop.
  • the exemplary embodiment of the present disclosure can also combine the binding relationship between the answering props and the virtual character in Table 2 to obtain the binding relationship between the virtual character, the target answering prop and the target scene prop.
  • the following takes answering props that represent the answer content in a static visualization manner as an example.
  • Table 3 shows a statistical table of binding relationships between target answering props and target scene props in an exemplary embodiment of the present disclosure.
  • in the exemplary embodiment of the present disclosure, question 1 only has flag 1, and flag 1 is the target answering prop.
  • scene prop A, scene prop B, scene prop C and scene prop D may all be the target scene prop selected by A.
  • flag 1 is bound to the selected scene prop to obtain the binding relationship between flag 1 and that scene prop.
  • the exemplary embodiment of the present disclosure can combine Table 2 and Table 3 to obtain the binding relationship between A, flag 1 and the selected scene prop.
  • when subsequently determining the student user's answer result, the exemplary embodiments of the present disclosure can determine the virtual character to which the target answering prop and the target scene prop belong based on the binding relationship between the virtual character, the target answering prop and the target scene prop.
  • the efficiency of determination can be improved when answering questions.
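  • as an illustration only, the binding-by-identity bookkeeping described above could be recorded as in the following sketch; the record fields and function names are assumptions, not terms from the disclosure.

```python
# Illustrative sketch (hypothetical names) of recording, by identity, the binding
# relationship between a virtual character, its target answering prop and the
# selected target scene prop, in the spirit of Tables 2 and 3.

bindings = []  # one record per confirmed flag-planting operation

def bind(character_id: str, answer_prop_id: str, scene_prop_id: str) -> None:
    """Store the (character, answering prop, scene prop) binding by identity."""
    bindings.append({
        "character": character_id,      # e.g. "A"
        "answer_prop": answer_prop_id,  # e.g. "flag_1"
        "scene_prop": scene_prop_id,    # e.g. "scene_prop_C"
    })

def owner_of(answer_prop_id: str, scene_prop_id: str):
    """Look up which virtual character a bound pair belongs to when grading."""
    for record in bindings:
        if record["answer_prop"] == answer_prop_id and record["scene_prop"] == scene_prop_id:
            return record["character"]
    return None

bind("A", "flag_1", "scene_prop_C")
assert owner_of("flag_1", "scene_prop_C") == "A"
```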
  • each answering prop has a matching relationship with multiple scene props.
  • the triggering operation information of the target answering prop is marked, and the triggering operation information includes the number of triggering operations and the trigger execution status.
  • the trigger execution status is used to indicate whether the target answering prop and the target scene prop are fused and displayed.
  • the answering props can represent the answering content in a static visualization manner, or can dynamically represent multiple different preset answering contents in each display period.
  • the following takes answering props that represent the answer content in a static visualization manner as an example.
  • Table 4 A flag-planting operation information mark table
  • Table 4 shows a flagging operation information tag table according to an exemplary embodiment of the present disclosure.
  • flag 1 represents the student user's choice in a static visualization manner.
  • the terminal reports this flag-planting operation to the server.
  • the server marks the flag-planting operation information mark table and feeds it back to the terminal, and the marked flag-planting operation information mark table is displayed on the terminal.
  • exemplary embodiments of the present disclosure can determine the student user's answering progress based on the number of flag plantings and the flag-planting execution status, and can also determine, based on the flag-planting execution status, whether a fusion display for the triggering operation is shown in the virtual education scene interface. Therefore, the exemplary embodiments of the present disclosure can count the student's answering progress through the triggering operation information of the target answering props and, at the same time, show in the virtual education scene interface whether a fusion display exists for the triggering operation, reflecting the student user's answering progress in an intuitive form; a sketch of this bookkeeping follows.
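  • the sketch below shows one possible shape for the trigger-operation information (a count plus a fused/not-fused status) and a simple progress estimate; the names and the progress formula are assumptions for illustration only.

```python
# A minimal sketch (illustrative names) of the flag-planting operation information
# mark table: the number of trigger operations and a trigger execution status
# indicating whether the prop pair has been fused and displayed.

trigger_info = {}  # answer_prop_id -> {"count": int, "fused": bool}

def mark_trigger(answer_prop_id: str, fused: bool) -> None:
    """Record one flag-planting operation reported by the terminal."""
    entry = trigger_info.setdefault(answer_prop_id, {"count": 0, "fused": False})
    entry["count"] += 1
    entry["fused"] = fused

def answering_progress(answer_prop_id: str, total_questions: int) -> float:
    """Estimate progress as the share of questions already answered with this prop."""
    count = trigger_info.get(answer_prop_id, {"count": 0})["count"]
    return min(count / total_questions, 1.0)

mark_trigger("flag_1", fused=True)
print(answering_progress("flag_1", total_questions=4))  # 0.25 after one of four questions
```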
  • the answering props can represent the answering content in a static visualization manner, or can dynamically represent multiple different preset answering contents in each display period.
  • the method of the exemplary embodiment of the present disclosure may further include:
  • the triggering operation information of the target answering prop is marked.
  • the triggering operation information includes the number of triggering operations and the triggering execution status for the same answering content.
  • the triggering execution status is used to indicate whether the target answering prop and the target scene prop are fused and displayed.
  • the student's answering progress is counted in the background.
  • whether a fusion display exists for the triggering operation is shown in the virtual education scene interface to reflect the student user's answering progress in an intuitive form.
  • the following takes answering props that represent the answer content in a static visualization manner as an example.
  • Table 5 shows another flagging operation information tag table according to an exemplary embodiment of the present disclosure.
  • in Table 5, there are 4 true-false questions in the virtual education scene, with a total of eight virtual characters "A", "B", "C", "D", "E", "F", "G" and "H".
  • Flag 1 is assigned to A as the answer prop.
  • flag 1 is the target flag.
  • the student user explores the virtual education scene to find the target scene props corresponding to the four true-false questions; for each target scene prop, the student user determines whether to perform a flag-planting operation on the target flag prop, and after each flag-planting operation on the target flag prop, the number of flag plantings and the flag-planting execution status are marked in the flag-planting operation information mark table.
  • each answering prop represents the same answer content across different questions, so the answering props are reusable in different questions; therefore, the number of answering props configured in the virtual education scene can be reduced across different questions, thereby improving the system's running speed and screen fluency as well as the student user's operational responsiveness, and thus enhancing the student user's course experience.
  • after allocating at least one answering prop to the virtual character based on the question information, the exemplary embodiment of the present disclosure responds to a triggering operation on the target answering prop as follows.
  • the method according to the exemplary embodiment of the present disclosure may further include:
  • a derived answer prop that statically represents the target answer content is reproduced based on the target answer prop.
  • the derived answering prop is a target answering prop that statically represents the target answer content, and the target answer content is one of a plurality of preset answer contents.
  • the target answer prop represents different preset answer contents at different periods within a presentation cycle
  • a derived answer prop is copied from the target answer prop that represents the target answer content in a static manner.
  • the derived answering prop represents the target answer content.
  • the derived answering prop is used as the final target answering prop for answering.
  • the number of triggers of the target answering prop can be set based on the question type. After the terminal receives the student user's triggering operation on the target answering prop, the triggering operation is reported to the server, which counts the number of triggering operations on the target answering prop so as to determine the student user's answering progress based on that number.
  • the teacher can set the preset triggering times of the target answering props in advance on the teacher client management interface according to the question type.
  • the preset triggering times can be used to limit the number of triggering operations of the target answering prop in the same question, and can also be used to limit the number of triggering operations of the target answering prop across multiple different questions of the same type.
  • the teacher can also set the preset triggering times for limiting the target answering prop in the same question and the preset triggering times for limiting the target answering prop on multiple different questions of the same type at the same time on the management interface.
  • the preset number of triggering operations is used to limit the number of triggering operations of the target answering prop in the same question
  • when the number of triggering operations of the student user for the question is greater than or equal to the preset number of triggering operations, the student user completes the answer to the question.
  • the terminal automatically switches to the virtual education scene for the next topic.
  • when the preset number of triggering operations is used to limit the number of triggering operations of the target answering prop across multiple different questions of the same type, and the number of triggering operations of the student user on those multiple different questions is greater than or equal to the preset number of triggering operations, the student user completes the answering.
  • when the answering prop represents the answer content in a static visualization manner: if it is determined that the number of triggering operations of the target answering prop is less than the preset triggering number, the target answering prop is in a trigger-allowed state; if it is determined that the number of triggering operations of the target answering prop is greater than or equal to the preset triggering number, the target answering prop is in a trigger-prohibited state.
  • the terminal automatically switches to the virtual education scene corresponding to the next judgment question so that the student user can continue to answer.
  • for example, when the number of triggering operations of the target answering prop for any single true-false question is greater than or equal to 1,
  • the target answering prop for that true-false question is in a trigger-prohibited state.
  • likewise, when the total number of triggering operations of the target answering prop across the questions is greater than or equal to 5, the target answering prop is in a trigger-prohibited state.
  • the exemplary embodiments of the present disclosure can determine the student's answering progress based on the number of triggering operations of the target answering prop, and a target answering prop in the trigger-prohibited state can remind the student user that the answering operation has been completed.
  • when the target answering prop is in the trigger-allowed state, every time the student user completes a triggering operation of the target answering prop for a question, the terminal automatically switches to the virtual education scene corresponding to the next question so that the student user can continue to answer (see the state sketch below).
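  • a minimal sketch of the trigger-state rule for statically visualized answering props, assuming hypothetical function and parameter names:

```python
# Sketch (assumed names) of the trigger-state rule for statically visualized
# answering props: once the number of trigger operations reaches the preset
# trigger count, the target answering prop becomes trigger-prohibited.

def trigger_state(trigger_count: int, preset_trigger_count: int) -> str:
    """Return 'allowed' while below the preset count, otherwise 'prohibited'."""
    return "allowed" if trigger_count < preset_trigger_count else "prohibited"

# Example: five true-false questions of the same type, one trigger per question.
print(trigger_state(trigger_count=4, preset_trigger_count=5))  # allowed
print(trigger_state(trigger_count=5, preset_trigger_count=5))  # prohibited
```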
  • when the answering prop dynamically and visually represents multiple different preset answer contents in each presentation period, it is determined whether the number of reproductions of the derived answering props representing the same target answer content is less than the preset number of reproductions.
  • if it is, and the target answering prop currently represents the target answer content,
  • the target answering prop is in a trigger-allowed state, where the number of scene props corresponding to each preset answer content is equal to the preset number of reproductions; if it is determined that the number of reproductions of the derived answering props representing the same target answer content is greater than or equal to the preset number of reproductions, and the target answering prop represents the target answer content, the target answering prop is in a trigger-prohibited state.
  • the number of scene props corresponding to each preset answer content is known; that is to say, when a preset answer content is the target answer content, the preset number of reproductions of the derived answering props corresponding to that target answer content is known. Therefore, based on the number of scene props corresponding to each preset answer content, the preset number of reproductions of the derived answering props corresponding to that preset answer content can be determined.
  • teachers can set the preset number of re-enactments of derived answer props in advance based on the question information.
  • the student user can reproduce derived answering props based on the target answer content represented by the target answering prop in order to answer. If the number of reproductions of derived answering props representing the target answer content is less than the preset number, then, when the target answering prop dynamically and visually represents the target answer content within a presentation cycle, the target answering prop is in a trigger-allowed state, and in response to a triggering operation on the target answering prop, a derived answering prop that statically represents the target answer content is copied as a new target answering prop for answering.
  • otherwise, the target answering prop is in a trigger-prohibited state: even if triggering operations continue to be applied to the target answering prop, no derived answering prop will be reproduced for answering.
  • the function of the target answering prop to visually represent the target answer content can be deleted. It should be understood that when the number of reproductions of derived answering props representing the same target answer content is equal to the preset number of reproductions, the answering for that target answer content has been completed; in the subsequent answering process, the preset answer contents other than the target answer content are used as candidates, so the function of the target answering prop to visually represent the target answer content can be deleted to prevent the student user from accidentally touching it and wasting answering time.
  • the target answering content is visually marked.
  • the visual marking method can be color marking, gray marking, pattern marking, symbol marking, etc., but is not limited to these. It should be understood that when the number of reproductions of derived answering props representing the same target answer content is equal to the preset number of reproductions, the answering for that target answer content has been completed, and in the subsequent answering process the preset answer contents other than the target answer content are used as candidates.
  • when the target answering prop represents the target answer content, it can be visually marked to remind the student user, which avoids accidental touches that waste answering time and improves answering efficiency (a sketch of the reproduction rule follows).
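  • the following sketch illustrates the reproduction rule for dynamically visualized answering props under assumed names; it is not the disclosure's implementation, only a worked example of the counting logic.

```python
# Illustrative sketch of the reproduction rule for dynamically visualized answering
# props: a derived prop that statically represents the target answer content may be
# copied only while its reproduction count is below the preset number, which equals
# the number of scene props matching that content. Names are assumptions.

reproduction_counts = {}  # target answer content -> number of derived props made

def can_reproduce(content: str, preset_reproductions: int) -> bool:
    """True while fewer derived props exist than the preset reproduction count."""
    return reproduction_counts.get(content, 0) < preset_reproductions

def reproduce_derived_prop(content: str, preset_reproductions: int):
    """Copy a derived answering prop for `content`, or return None if prohibited."""
    if not can_reproduce(content, preset_reproductions):
        return None  # target answering prop is in a trigger-prohibited state
    reproduction_counts[content] = reproduction_counts.get(content, 0) + 1
    return {"type": "derived_answer_prop", "content": content}

prop = reproduce_derived_prop("correct", preset_reproductions=1)
print(prop is not None, can_reproduce("correct", preset_reproductions=1))  # True False
```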
  • Step 203 Display the matching result of the target answering prop and the target scene prop determined by the matching relationship.
  • the matching result is used to indicate whether the answer content represented by the target answer prop is correct.
  • the answering prop information configured by the teacher user according to the question information in the teacher client management interface includes background configuration information, and the background configuration information may include the matching relationship between the answering props and scene props of the same question.
  • therefore, exemplary embodiments of the present disclosure can determine the matching result of the target answering prop and the target scene prop based on the matching relationship between the answering props and the scene props of the same question in the background configuration information, and display the matching result in the virtual education scene interface, thereby determining whether the student user's answer result is correct based on the matching result.
  • Table 6 shows a matching result statistical table according to an exemplary embodiment of the present disclosure.
  • the target answering prop is flag 1.
  • the target scene props are scene prop A, scene prop B and scene prop D respectively
  • the matching results of flag 1 with scene prop A, scene prop B and scene prop D are all wrong, so the student user's answer result is wrong; at this time, the answer content represented by flag 1 is wrong.
  • when the target scene prop is scene prop C,
  • the matching result between flag 1 and scene prop C is correct, and the student user's answer result is correct. At this time, the answer content represented by flag 1 is correct.
  • Table 3 of the exemplary embodiment of the present disclosure provides the binding relationship statistical table of the answering props and the target scene props. Therefore, the exemplary embodiment of the present disclosure can also display the matching result of the target answering prop and the target scene prop determined by both the matching relationship and the binding relationship; in this case, whether the student user's answer result is correct can be determined based on the matching relationship between the answering props and the scene props together with the binding relationship between the target answering prop and the target scene prop.
  • Table 7 shows another matching result statistical table according to an exemplary embodiment of the present disclosure.
  • for question 1, combined with the binding relationship between the target answering prop and the target scene prop in Table 3, as well as the binding relationship between the virtual character, the target answering prop and the target scene prop, the matching result of the target answering prop and the target scene prop is determined,
  • together with the virtual character and the student user to whom that matching result belongs.
  • the matching results of flag 1 with scene prop A, scene prop B and scene prop D respectively are errors.
  • the answer result of student user A, who has the binding relationship corresponding to this matching result, is wrong; at this time, the answer content represented by flag 1 is wrong.
  • when the target scene prop is scene prop C,
  • the matching result between flag 1 and scene prop C is correct
  • the answer result of student user A corresponding to the matching result is correct.
  • the answer content represented by flag 1 is correct.
  • in this way, the answer result of any student user can be determined.
  • the answering situation of students can be quickly counted in the classroom, and the answering progress and results of student users can be understood from it, which helps teachers analyze students' learning efficiency; a grading sketch follows.
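  • as an illustration, the matching-result lookup described above could work as in the sketch below; the dictionary layout and names are hypothetical and only demonstrate how a fused pair is graded against the configured matching relationship.

```python
# A minimal sketch (hypothetical data) of displaying a matching result: the
# background configuration records, per question, which scene props correctly
# match the answer content represented by each answering prop; the fused pair
# recorded in the binding relationship is looked up against it.

matching_relationship = {
    # question -> answering prop -> scene props that are a correct match
    "question_1": {"flag_1": {"scene_prop_C"}},
}

def matching_result(question: str, answer_prop: str, scene_prop: str) -> bool:
    """True if the fused pair is a correct match for this question."""
    return scene_prop in matching_relationship.get(question, {}).get(answer_prop, set())

print(matching_result("question_1", "flag_1", "scene_prop_C"))  # True  -> answer correct
print(matching_result("question_1", "flag_1", "scene_prop_A"))  # False -> answer wrong
```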
  • there is a matching relationship between the answering props assigned to the virtual character and at least one scene prop; the answering props and scene props with a matching relationship are associated with the same question, and the answering props are used to represent the answer content based on the question information.
  • the matching result of the target answering prop and the target scene prop determined by the matching relationship can be displayed, and whether the answer content represented by the target answering prop is correct is then determined based on the matching result.
  • the method of the exemplary embodiment of the present disclosure can use the characteristic that answering props represent answer content, and assign the answering props to the virtual character, so that the student user can control the virtual character through human-computer interaction to fuse and display the selected target answering prop with the target scene prop.
  • this enhances the sense of immersion and interest in answering questions, allows student users to truly participate in the test session, increases enthusiasm and activity in answering, and enhances student users' sense of course experience, thereby improving learning effects.
  • since the answering props represent the answer content based on the question information, for different questions the answer content represented by the answering props assigned to the virtual character may or may not be the same.
  • the terminal includes hardware structures and/or software modules corresponding to each function.
  • the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in the form of hardware, or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each specific application, but such implementations should not be considered beyond the scope of this disclosure.
  • Embodiments of the present disclosure can divide the terminal into functional units according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or software function modules. It should be noted that the division of modules in the embodiment of the present disclosure is schematic and is only a logical function division. In actual implementation, there may be other division methods.
  • FIG. 7 shows a module schematic block diagram of a question answering device according to an exemplary embodiment of the present disclosure.
  • the answering device 700 is used in a virtual education scene.
  • the virtual education scene has at least a virtual character and at least one scene prop.
  • the device 700 includes:
  • the processing module 701 is configured to allocate at least one answer prop to the virtual character based on the question information.
  • the answer prop is used to represent the answer content for the question information.
  • there is a matching relationship between each of the answering props and at least one of the scene props, and the answering props and the scene props having a matching relationship are associated with the same question;
  • the display module 702 is configured to fuse and display the target answering prop and the target scene prop in response to a triggering operation on the target answering prop, where the target answering prop is one of the at least one answering prop, and the target scene prop is one of the at least one scene prop;
  • the processing module 701 is also configured to display a matching result between the target answering prop and the target scene prop determined by the matching relationship, and the matching result is used to indicate whether the answering content represented by the target answering prop is correct.
  • the processing module 701 is also configured to, after allocating at least one answering prop to the virtual character based on the question information, bind the target in response to a triggering operation for the target answering prop. Answer props and target scene props, and obtain the binding relationship between the target answer props and the target scene props.
  • the processing module 701 is also configured to display the matching result of the target answering prop and the target scene prop determined by the matching relationship and the binding relationship.
  • the processing module 701 is also configured to: when it is determined that the number of triggering operations of the target answering prop is less than a preset number of triggers, the target answering prop is in a trigger-allowed state; when it is determined that the number of triggering operations of the target answering prop is greater than or equal to the preset number of triggers, the target answering prop is in a trigger-prohibited state.
  • the processing module 701 is also configured to: after at least one answering prop is allocated to the virtual character based on the question information,
  • mark the triggering operation information of the target answering prop, where the triggering operation information includes the number of triggering operations and the trigger execution status, and the trigger execution status is used to indicate whether the target answering prop and the target scene prop are fused and displayed.
  • the answering props represent the answer content in a static visualization manner, and
  • the matching relationship is the matching relationship between the answer content represented by the answering props in a static visualization manner and at least one of the scene props.
  • the processing module 701 is also configured to: after at least one answering prop is allocated to the virtual character based on the question information,
  • mark the triggering operation information of the target answering prop, where the triggering operation information includes the number of triggering operations and the trigger execution status for the same answer content, and the trigger execution status is used to indicate whether the target answering prop and the target scene prop are fused and displayed.
  • the answer content is preset answer content
  • the target answer props have multiple presentation periods
  • the target answering props dynamically and visually represent multiple different preset answer contents in each of the presentation periods.
  • the matching relationship is the matching relationship between each of the preset answer content and at least one of the scene props;
  • the processing module 701 is also configured to: after allocating at least one answering prop to the virtual character based on the question information, and before integrating and displaying the target answering prop and the target scene prop in response to a triggering operation for the target answering prop,
  • when the target answering prop visually represents the target answer content, in response to a triggering operation on the target answering prop, reproduce, based on the target answering prop, a derived answering prop that statically represents the target answer content, where the derived answering prop is a target answering prop that statically represents the target answer content, and the target answer content is one of a plurality of preset answer contents.
  • the processing module 701 is also used to determine whether the number of reproductions of the derived answering props representing the same target answer content is less than the preset number of reproductions while the target answering prop represents the target answer content.
  • if so, the target answering prop is in a trigger-allowed state, where the number of scene props corresponding to each preset answer content is equal to the preset number of reproductions; if it is determined that the number of reproductions of the derived answering props representing the same target answer content is greater than or equal to the preset number of reproductions and the target answering prop represents the target answer content,
  • the target answering prop is in a trigger-prohibited state.
  • the processing module 701 is also configured to delete the target answering prop when it is determined that the number of replications of the derived answering prop representing the same target answering content is equal to the preset number of replications.
  • the processing module 701 is also configured to: after at least one answering prop is allocated to the virtual character based on the question information and before the target answering prop and the target scene prop are fused and displayed in response to a triggering operation on the target answering prop, display the operation prompt information in the virtual education scene when it is determined that the virtual character is close to the scene props, and determine that the virtual character sends a confirmation message in response to the operation prompt information, where the confirmation message is used to indicate that the student user accepts the operation invitation for the target answering prop.
  • the number of the answering props is multiple, and the processing module 701 is also configured to: after at least one answering prop is allocated to the virtual character based on the question information, and before the target answering prop and the target scene prop are fused and displayed in response to a triggering operation on the target answering prop, visually display the answer content represented by the plurality of answering props when it is determined that the virtual character is close to the scene props.
  • the distance between the position of the virtual character and the position of the scene prop is less than or equal to a preset distance threshold.
  • the processing module 701 is also configured to, before the target answering prop and the target scene prop are fused and displayed in response to the triggering operation on the target answering prop, use a route guidance method to guide the virtual character to move to a target prop, where the target prop is the at least one answering prop and/or the target scene prop.
  • the route guidance method includes at least one of a visual guidance method, an air wall guidance method, and an audio guidance method;
  • when the audio guidance method is used, the target prop plays audio.
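  • the sketch below only illustrates how such guidance cues might be combined; the function, parameters and cue strings are assumptions, not part of the disclosure.

```python
# Sketch (illustrative only) of route guidance toward a target prop using one or
# more of the guidance methods named above: visual, air-wall, or audio guidance.

def guide_to_target(character_pos, target_pos, methods=("visual", "audio")):
    """Emit simple guidance cues that lead the virtual character to the target prop."""
    dx, dy = target_pos[0] - character_pos[0], target_pos[1] - character_pos[1]
    cues = []
    if "visual" in methods:
        cues.append(f"show an arrow pointing ({dx:+.1f}, {dy:+.1f})")
    if "air_wall" in methods:
        cues.append("block paths that lead away from the target")
    if "audio" in methods:
        cues.append("play audio at the target prop, louder as the character approaches")
    return cues

print(guide_to_target((0, 0), (3, 4)))
```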
  • Figure 8 shows a schematic block diagram of a chip according to an exemplary embodiment of the present disclosure.
  • the chip 800 includes one or more (including two) processors 801 and a communication interface 802 .
  • the communication interface 802 can support the server in performing the data sending and receiving steps in the above method, and the processor 801 can support the server in performing the data processing steps in the above method.
  • the chip 800 also includes a memory 803.
  • the memory 803 can include a read-only memory and a random access memory, and provides operating instructions and data to the processor. Part of the memory may also include non-volatile random access memory (NVRAM).
  • the processor 801 invokes operation instructions stored in the memory (the operation instructions may be stored in the operating system) to perform the corresponding operations.
  • the processor 801 controls the processing operations of any one of the terminal devices.
  • the processor may also be called a central processing unit (CPU).
  • Memory 803 may include read-only memory and random access memory, and provides instructions and data to processor 801. Portion of memory 803 may also include NVRAM.
  • the processor, communication interface and memory are coupled together through a bus system.
  • the bus system may also include a power bus, a control bus, a status signal bus, etc.
  • the various buses are labeled bus system 804 in FIG. 8 .
  • the methods disclosed in the above embodiments of the present disclosure can be applied in a processor or implemented by the processor.
  • the processor may be an integrated circuit chip that has signal processing capabilities. During the implementation process, each step of the above method can be completed by instructions in the form of hardware integrated logic circuits or software in the processor.
  • the above-mentioned processor can be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • Each disclosed method, step and logical block diagram in the embodiment of the present disclosure can be implemented or executed.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • the steps of the method disclosed in conjunction with the embodiments of the present disclosure can be directly implemented by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other mature storage media in this field.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • Exemplary embodiments of the present disclosure also provide an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor.
  • the memory stores a computer program executable by the at least one processor, and when executed by the at least one processor, the computer program is used to cause the electronic device to perform a method according to an embodiment of the present disclosure.
  • Exemplary embodiments of the present disclosure also provide a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is used to cause the computer to perform the method implemented by the present disclosure.
  • Exemplary embodiments of the present disclosure also provide a computer program product, including a computer program, wherein the computer program, when executed by a processor of a computer, is used to cause the computer to perform a method according to an embodiment of the present disclosure.
  • Electronic devices are intended to refer to various forms of digital electronic computing equipment, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are examples only and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • the electronic device 900 includes a computing unit 901 that can perform various appropriate actions and processing according to a computer program stored in a read-only memory (ROM) 902 or loaded from a storage unit 908 into a random access memory (RAM) 903.
  • in the RAM 903, various programs and data required for the operation of the device 900 can also be stored.
  • Computing unit 901, ROM 902 and RAM 903 are connected to each other via bus 904.
  • An input/output (I/O) interface 905 is also connected to bus 904.
  • the input unit 906 may be any type of device capable of inputting information to the electronic device 900.
  • the input unit 906 may receive input numeric or character information and generate key signal input related to user settings and/or function control of the electronic device.
  • Output unit 907 may be any type of device capable of presenting information, and may include, but is not limited to, a display, speakers, video/audio output terminal, vibrator, and/or printer.
  • the storage unit 908 may include, but is not limited to, magnetic disks and optical disks.
  • the communication unit 909 allows the electronic device 900 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver and/or chipset, such as BluetoothTM devices, WiFi devices, WiMax devices, cellular communication devices and/or the like.
  • the computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, etc.
  • the computing unit 901 performs the various methods and processes described above.
  • the methods of exemplary embodiments of the present disclosure may be implemented as a computer software program that is tangibly embodied in a machine-readable medium, such as storage unit 908.
  • part or all of the computer program may be loaded and/or installed onto the electronic device 900 via the ROM 902 and/or the communication unit 909.
  • computing unit 901 may be configured to perform a method in any other suitable manner (eg, by means of firmware).
  • Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing device, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • the program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices or devices, or any suitable combination of the foregoing.
  • more specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disk, optical disk, memory, programmable logic device (PLD)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals.
  • the term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • the systems and techniques described herein may be implemented on a computer having a display device (eg, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user ); and a keyboard and pointing device (eg, a mouse or a trackball) through which a user can provide input to the computer.
  • Other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form (including acoustic input, voice input, or tactile input).
  • the systems and techniques described herein may be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user's computer having a graphical user interface or web browser through which the user can interact with implementations of the systems and technologies described herein), or a computing system that includes any combination of such back-end components, middleware components, or front-end components.
  • the components of the system may be interconnected by any form or medium of digital data communication (eg, a communications network). Examples of communication networks include: local area network (LAN), wide area network (WAN), and the Internet.
  • Computer systems may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact over a communications network.
  • the relationship of client and server is created by computer programs running on corresponding computers and having a client-server relationship with each other.
  • the computer program product includes one or more computer programs or instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, a terminal, a user equipment, or other programmable device.
  • the computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
  • for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired or wireless means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center that integrates one or more available media.
  • the available media may be magnetic media, such as floppy disks, hard disks, and magnetic tapes; they may also be optical media, such as digital video discs (DVDs); they may also be semiconductor media, such as solid state drives (SSDs).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

Disclosed are a question answering method and apparatus, and an electronic device. The method comprises: assigning, on the basis of question information, at least one answering prop representing answer content for the question information to a virtual character, each answering prop having a matching relationship with at least one scene prop, and the answering prop and the scene prop having the matching relationship being associated with a same question; in response to a triggering operation for a target answering prop, fusing and displaying the target answering prop and a target scene prop; and displaying a matching result of the target answering prop and the target scene prop determined according to the matching relationship. With the method, a student user can control the virtual character to complete answering by means of human-computer interaction, so that the student user's sense of immersion and interest in answering are enhanced, the student user's course experience is improved, and the learning effect is thereby improved.
PCT/CN2023/097736 2022-06-14 2023-06-01 Procédé et appareil de réponse à une question, et dispositif électronique WO2023241369A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210663972.3 2022-06-14
CN202210663972.3A CN114743422B (zh) 2022-06-14 2022-06-14 一种答题方法及装置和电子设备

Publications (1)

Publication Number Publication Date
WO2023241369A1 true WO2023241369A1 (fr) 2023-12-21

Family

ID=82287588

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/097736 WO2023241369A1 (fr) 2022-06-14 2023-06-01 Procédé et appareil de réponse à une question, et dispositif électronique

Country Status (2)

Country Link
CN (1) CN114743422B (fr)
WO (1) WO2023241369A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114743422B (zh) * 2022-06-14 2022-08-26 北京新唐思创教育科技有限公司 一种答题方法及装置和电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023693A (zh) * 2016-05-25 2016-10-12 北京九天翱翔科技有限公司 一种基于虚拟现实技术和模式识别技术的教育系统及方法
US20160350311A1 (en) * 2015-05-26 2016-12-01 Frederick Reeves Scenario-Based Interactive Behavior Modification Systems and Methods
CN110400180A (zh) * 2019-07-29 2019-11-01 腾讯科技(深圳)有限公司 基于推荐信息的显示方法、装置及存储介质
CN110604920A (zh) * 2019-09-16 2019-12-24 腾讯科技(深圳)有限公司 基于游戏的学习方法、装置、电子设备及存储介质
CN113626621A (zh) * 2021-06-23 2021-11-09 北京思明启创科技有限公司 一种在线互动教学的课程内容生成系统和编辑装置
CN114743422A (zh) * 2022-06-14 2022-07-12 北京新唐思创教育科技有限公司 一种答题方法及装置和电子设备

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010233922A (ja) * 2009-03-31 2010-10-21 Namco Bandai Games Inc プログラム、情報記憶媒体及びゲーム装置
CN101719326A (zh) * 2009-12-31 2010-06-02 博采林电子科技(深圳)有限公司 一种游戏式学习系统和方法
TWI639147B (zh) * 2016-12-29 2018-10-21 盧玉玲 Digital learning assessment system and digital learning assessment method
CN110193205B (zh) * 2019-06-28 2022-07-26 腾讯科技(深圳)有限公司 虚拟对象的成长模拟方法、装置、终端、设备及介质
CN110866847A (zh) * 2019-09-29 2020-03-06 许配显 一种基于联网游戏辅助教学运营管理方法
CN111984126A (zh) * 2020-09-21 2020-11-24 重庆虚拟实境科技有限公司 答题记录生成方法、装置、电子设备及存储介质
CN113891138B (zh) * 2021-09-27 2024-05-14 深圳市腾讯信息技术有限公司 互动操作提示方法和装置、存储介质及电子设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160350311A1 (en) * 2015-05-26 2016-12-01 Frederick Reeves Scenario-Based Interactive Behavior Modification Systems and Methods
CN106023693A (zh) * 2016-05-25 2016-10-12 北京九天翱翔科技有限公司 一种基于虚拟现实技术和模式识别技术的教育系统及方法
CN110400180A (zh) * 2019-07-29 2019-11-01 腾讯科技(深圳)有限公司 基于推荐信息的显示方法、装置及存储介质
CN110604920A (zh) * 2019-09-16 2019-12-24 腾讯科技(深圳)有限公司 基于游戏的学习方法、装置、电子设备及存储介质
CN113626621A (zh) * 2021-06-23 2021-11-09 北京思明启创科技有限公司 一种在线互动教学的课程内容生成系统和编辑装置
CN114743422A (zh) * 2022-06-14 2022-07-12 北京新唐思创教育科技有限公司 一种答题方法及装置和电子设备

Also Published As

Publication number Publication date
CN114743422B (zh) 2022-08-26
CN114743422A (zh) 2022-07-12

Similar Documents

Publication Publication Date Title
AU2019262848B2 (en) Interactive application adapted for use by multiple users via a distributed computer-based system
JP2023179795A (ja) 弾幕処理方法、装置、電子機器及びプログラム
AU2019201980B2 (en) A collaborative virtual environment
US20120107790A1 (en) Apparatus and method for authoring experiential learning content
WO2023231989A1 (fr) Procédé et appareil d'interaction d'enseignement pour salle de classe en ligne, dispositif et support
WO2023241369A1 (fr) Procédé et appareil de réponse à une question, et dispositif électronique
Alce et al. WozARd: a wizard of Oz method for wearable augmented reality interaction—a pilot study
CN110544399A (zh) 图形化远程教学系统以及图形化远程教学方法
Yigitbas et al. Design and evaluation of a collaborative uml modeling environment in virtual reality
Gharbi Empirical Research on Developing an Educational Augmented Reality Authoring Tool
Bueckle et al. Optimizing performance and satisfaction in matching and movement tasks in virtual reality with interventions using the data visualization literacy framework
Dasgupta Surveys, collaborative art and virtual currencies: Children programming with online data
CN114882751B (zh) 一种选择题的投票方法及装置和电子设备
WO2022237702A1 (fr) Procédé et dispositif de commande pour carte interactive intelligente
KR102615441B1 (ko) Xr기반 인터랙티브 디지털 역사 문학관 시스템 및 방법
Rana Open and accessible education with virtual reality
Sarker Understanding how to translate from children’s tangible learning apps to mobile augmented reality through technical development research
KR20230127613A (ko) 메타버스 기반의 실습형 교육 시스템
Abeywardena Educational app development toolkit for teachers and learners
Wu et al. Research on the production, application and management of virtual reality in the National Palace Museum
Dromberg et al. Escape the Decision Arena: Designing and evaluating an immersive collaborative gaming experience in a cylindrical environment
Assaf et al. Cues to fast‐forward collaboration: A Survey of Workspace Awareness and Visual Cues in XR Collaborative Systems
Florea Virtual reality interface for the PATIO user involvement tool
CN115623270A (zh) 互动信息处理方法、装置、电子设备以及存储介质
Brown et al. Interactive Level Design for iOS Assignment Delivery: A Case Study

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23822938

Country of ref document: EP

Kind code of ref document: A1