CN112732084A - Future classroom interaction system and method based on virtual reality technology - Google Patents


Info

Publication number
CN112732084A
Authority
CN
China
Prior art keywords
data
information
virtual
module
model
Prior art date
Legal status
Pending
Application number
CN202110045162.7A
Other languages
Chinese (zh)
Inventor
王亚刚 (Wang Yagang)
Current Assignee
Xi'an Feidie Virtual Reality Technology Co., Ltd.
Original Assignee
Xi'an Feidie Virtual Reality Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Xi'an Feidie Virtual Reality Technology Co., Ltd.
Priority to CN202110045162.7A
Publication of CN112732084A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10: Services
    • G06Q 50/20: Education
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/205: 3D [Three Dimensional] animation driven by audio data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/08: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B 5/14: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00: Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208: Noise filtering

Abstract

The invention relates to the technical field of virtual reality, and in particular to a future classroom interaction method based on virtual reality technology, comprising the following steps: 1) binding a person to a model in a resource library; 2) collecting the person's real-time action information and real-time sound information; 3) parsing the real-time action information and real-time sound information; 4) matching the parsed information against the model's attribute data; 5) uploading; 6) distributing; 7) interacting. By matching the action and sound information collected in real time to the model, the fidelity of the model's interaction is improved; interaction between actions is completed through the model's collision attributes, enriching the available interaction modes and strengthening the user's sense of immersion.

Description

Future classroom interaction system and method based on virtual reality technology
Technical Field
The invention relates to the technical field of virtual reality, in particular to a future classroom interaction system and method based on a virtual reality technology.
Background
Virtual Reality (VR) refers to a three-dimensional scene created artificially by a computer system and sensor technology. Each object in the scene has a position and orientation relative to the system's coordinate system, and the view the user sees is determined by the user's position and head (eye) orientation. Virtual reality creates a brand-new state of human-computer interaction: by engaging all of the user's senses (vision, hearing, touch, smell, and so on) it delivers a more realistic, immersive experience, and it is widely applied in fields such as media, social networking, and education.
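As a concrete illustration of that principle (ours, not from the patent), the sketch below builds a view matrix from a user's position and gaze direction in Python with NumPy; this is the standard way a VR renderer derives what the user sees from head pose:

```python
# A minimal sketch (not from the patent): computing the view seen by a user
# from their position and head orientation within a shared coordinate system.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 view matrix from the user's eye position and gaze target."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    # Rotation rows are the camera basis; translation moves the world to the eye.
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = -view[:3, :3] @ eye
    return view

# The same object appears differently as the head moves or turns:
print(look_at(np.array([0.0, 1.6, 2.0]), np.array([0.0, 1.0, 0.0])))
```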
At present, virtual reality teaching is widely used in online course instruction: by watching a virtual reality classroom, students can learn in an immersive environment, which raises both their interest in learning and their learning efficiency.
Existing animated interaction takes a single form, and the models support only a single action, which weakens the user's sense of engagement. Effective one-to-one interaction is impossible, so interaction remains insufficient, learning efficiency drops, and large-scale adoption is difficult.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a future classroom interaction system and method based on virtual reality technology, solving the prior art's problem of a single form of interaction.
In order to achieve this purpose, the invention provides the following technical scheme: a future classroom interaction system based on virtual reality technology, characterized in that it comprises an interaction module and at least two virtual animation generation systems, each virtual animation generation system comprising:
a correlation module: used for binding a person to a model in a resource library, the model having a plurality of attribute data;
an action acquisition module: used for acquiring the person's real-time action information;
a sound acquisition module: used for acquiring the person's real-time sound information;
an information analysis module: used for parsing the real-time action information and real-time sound information to generate real-time action parameter information and multi-azimuth sound information;
a model behavior data generation module: used for matching the real-time action parameter information against the model's attribute data to obtain virtual action data, processing the multi-azimuth sound information to generate virtual sound data, and combining the virtual sound data with the virtual action data to generate model behavior data;
a transmission module: used for uploading the model behavior data to the transfer module;
a transfer module: used for receiving the model behavior data and then distributing it to the users who need to interact;
an interaction module: used for generating, from the model behavior data a user receives, a virtual animation consistent with the person's behavior and displaying it; the user then makes a feedback behavior toward the displayed virtual animation, the feedback behavior is turned into corresponding model behavior data by the animation generation system and sent to the user who needs to receive the feedback, and the virtual animation is finally displayed in that user's interaction module; this cycle repeats until the interaction is complete.
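Read as a dataflow, the module chain above is capture → parse → match → combine → upload → relay → display, with feedback re-entering the same pipeline. The sketch below mimics that flow; every name and data shape in it is a hypothetical stand-in for illustration, not the patent's implementation:

```python
# Schematic sketch of the module chain; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelBehaviorData:
    action: dict    # virtual action data (skeleton binding + matched animation)
    sound: list     # virtual (multi-azimuth) sound data

def parse_motion(frames):
    # Information analysis: raw frames -> action part / displacement / angle.
    return {"part": "arm", "displacement": frames[-1] - frames[0], "angle": 30.0}

def match_to_model(params, model):
    # Model behavior generation: bind the action part to the model's skeleton.
    return {"bone": model["skeleton"][params["part"]], **params}

def relay_distribute(behavior, users):
    # Transfer module: hand the behavior data to every user who needs to interact.
    return {u: behavior for u in users}

model = {"skeleton": {"arm": "arm_bone_01"}}
behavior = ModelBehaviorData(match_to_model(parse_motion([0.0, 0.4]), model),
                             sound=[0.1, -0.2])
print(relay_distribute(behavior, ["student_A", "student_B"]))
```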
Further defined, the information analysis module comprises:
an action analysis module: used for parsing the real-time action information into real-time action parameter information, the real-time action parameter information comprising action part information, displacement information and angle information, and the attribute data comprising skeleton information, animation information and edge information;
a sound analysis module: used for parsing the real-time sound information into sound information for a plurality of different azimuths, generating the multi-azimuth sound information;
and the model behavior data generation module comprises:
a matching module: used for binding the action part information to the model's skeleton information, and matching single displacement information and single angle information against the model's single animation information to obtain single virtual action data;
a sound generation module: used for encoding the multi-azimuth sound information to generate virtual sound data;
a combination module: used for combining the single virtual action data with the virtual sound data to generate model behavior data.
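One plausible reading of the matching module is nearest-neighbor selection: per action part, choose the model animation whose displacement and angle best match the captured parameters. A minimal sketch, with an invented clip table and metric (neither is from the patent):

```python
# Hypothetical sketch of the matching module: choose the model animation clip
# whose (displacement, angle) signature is closest to the captured parameters.
ANIMATIONS = {  # per-part clip table: clip name -> (displacement in m, angle in deg)
    "arm": {"raise_arm": (0.4, 90.0), "wave": (0.1, 30.0)},
    "leg": {"raise_leg": (0.3, 45.0)},
}

def match_animation(part, displacement, angle):
    clips = ANIMATIONS[part]
    # Nearest clip under a simple Euclidean metric over the two parameters.
    return min(clips, key=lambda c: (clips[c][0] - displacement) ** 2
                                    + (clips[c][1] - angle) ** 2)

# A captured arm movement of 0.35 m and 80 degrees matches the raise-arm clip:
print(match_animation("arm", 0.35, 80.0))   # -> raise_arm
```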
Further defined, the sound analysis module comprises:
a filtering module: used for filtering and denoising the real-time sound information to obtain denoised sound data;
a three-dimensional module: used for performing multi-azimuth processing on the denoised sound data to obtain the multi-azimuth sound information;
and the model behavior data generation module further comprises:
an action fusion module: used for fusing multiple groups of single virtual action data to obtain combined virtual action data for synchronous actions;
the combination module specifically: used for combining the single virtual action data, or the combined virtual action data, with the corresponding virtual sound data to generate model behavior data.
Further defined, the model behavior data generation module further comprises:
a collision analysis module: used for analyzing the single or combined virtual action data against the model's collision information to judge whether the action collides with an object model in the spatial domain; if not, the data is uploaded directly to the combination module; if so, the object model is bound to the skeleton information to obtain single or combined virtual action data carrying the object model's motion information, which is then uploaded to the combination module;
an auxiliary analysis module: used for comparing the model behavior data against the model's edge information to judge whether it conflicts with the spatial domain in which it is to be generated; if so, a prompt is issued and the model behavior data is intercepted; if not, the data is uploaded to the transmission module.
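The collision analysis step can be pictured with an axis-aligned bounding-box (AABB) test; the box shapes, names, and binding rule below are our assumptions, not the patent's:

```python
# Hypothetical sketch of the collision analysis module: an AABB test decides
# whether the moving body part touches an object model; on contact the object
# is bound to the bone so it follows the action.
def aabb_overlap(a_min, a_max, b_min, b_max):
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def resolve_collision(hand_box, objects, bone="hand_bone"):
    bindings = {}
    for name, (lo, hi) in objects.items():
        if aabb_overlap(hand_box[0], hand_box[1], lo, hi):
            bindings[name] = bone   # e.g. the eraser now moves with the hand
    return bindings

# A raised hand sweeping through the eraser's box binds the eraser to it:
objects = {"eraser": ((0.9, 1.4, 0.0), (1.1, 1.5, 0.1))}
print(resolve_collision(((0.8, 1.3, -0.05), (1.0, 1.45, 0.05)), objects))
```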
Further defined, the transmission module comprises a compression module and an uploading module, and the transfer module comprises a receiving module, a distribution module and a sending module;
the compression module is used for compressing the model behavior data into animation data, improving transmission efficiency;
the uploading module is used for uploading the animation data to the transfer module;
the receiving module is used for receiving the animation data;
the distribution module is used for distributing the animation data to different users according to requirements;
and the sending module is used for sending the distributed animation data to the corresponding user side.
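A minimal sketch of the compress → upload → receive → distribute → send chain; the JSON-plus-zlib serialization and the per-course subscription table are assumptions, not the patent's format:

```python
# Hedged sketch of the transmission and transfer modules.
import json, zlib

def compress(behavior: dict) -> bytes:
    # Compression module: serialize and compress to cut transmission size.
    return zlib.compress(json.dumps(behavior).encode("utf-8"))

def distribute(payloads: dict, subscriptions: dict) -> dict:
    # Distribution module: each user gets only the course they subscribed to.
    return {user: payloads[course] for user, course in subscriptions.items()
            if course in payloads}

payloads = {"grade1": compress({"clip": "raise_arm"}),
            "grade2": compress({"clip": "raise_arm+raise_leg"})}
routed = distribute(payloads, {"student_A": "grade1", "student_B": "grade2"})
print({u: len(p) for u, p in routed.items()})   # compressed sizes per user
```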
A future classroom interaction method based on virtual reality technology, characterized by comprising the following steps:
1) virtual animation generation:
1.1) binding a person to a model in a resource library, the model having a plurality of attribute data;
1.2) collecting the person's real-time action information and real-time sound information;
1.3) parsing the real-time action information and real-time sound information to generate real-time action parameter information and multi-azimuth sound information;
1.4) matching the real-time action parameter information against the model's attribute data to obtain virtual action data; processing the multi-azimuth sound information to generate virtual sound data; and combining the virtual sound data with the virtual action data to generate model behavior data;
1.5) uploading the model behavior data to a transfer system;
1.6) receiving the model behavior data and distributing it to different users;
2) interaction:
generating, from the model behavior data the user receives, a virtual animation consistent with the person's behavior and displaying it; the user makes a feedback behavior toward the displayed virtual animation, the feedback behavior is turned into corresponding model behavior data by the animation generation system and sent to the user who needs to receive the feedback, and the virtual animation is finally displayed in the interaction module; this cycle repeats until the interaction is complete.
Further defined, step 1.3) and step 1.4) are specifically:
1.3) parsing the real-time action information into real-time action parameter information, the real-time action parameter information comprising action part information, displacement information and angle information, and the attribute data comprising skeleton information, animation information and edge information; and processing the real-time sound information into sound information for a plurality of different azimuths, generating multi-azimuth sound information;
1.4) binding the action part information to the model's skeleton information, and matching single displacement information and single angle information against the model's single animation information to obtain single virtual action data; encoding the multi-azimuth sound information to generate virtual sound data; and combining the single virtual action data with the virtual sound data to generate model behavior data.
Further defined, the multi-azimuth sound information generation in step 1.3) is specifically:
filtering and denoising the real-time sound information to obtain denoised sound data, then performing multi-azimuth processing on the denoised sound data to obtain the multi-azimuth sound information;
and step 1.4) further comprises:
1.4) binding the action part information to the model's skeleton information, matching a plurality of displacement information and a plurality of angle information against a plurality of the model's animation information to obtain a plurality of virtual action data, and fusing the plurality of virtual action data into combined virtual action data for synchronous actions; then combining the single virtual action data, or the combined virtual action data, with the corresponding virtual sound data to generate model behavior data.
Further defined, step 1.4) further comprises:
analyzing the single or combined virtual action data against the model's collision information to judge whether the action collides with an object model in the spatial domain; if not, combining directly; if so, binding the object model to the skeleton information to obtain single or combined virtual action data carrying the object model's motion information, then combining;
comparing the model behavior data against the model's edge information to judge whether it conflicts with the spatial domain in which it is to be generated; if so, issuing a prompt and intercepting the model behavior data; if not, executing step 1.5).
Further defined, the specific steps of step 1.5) and step 1.6) are as follows:
1.5.1) compressing the model behavior data into animation data, improving transmission efficiency;
1.5.2) uploading the animation data to a transfer system;
1.6.1) receiving animation data;
1.6.2) distributing the animation data to different users according to requirements;
1.6.3) sending the distributed animation data to the corresponding user terminal.
The invention has the following beneficial effects:
1. By matching the action and sound information collected in real time to the model, the fidelity of the model's interaction is improved; interaction between actions is completed through the model's collision attributes, enriching the interaction modes and strengthening the user's sense of immersion;
2. Distributing different animations to different users makes the interaction targeted;
3. Interaction enables bidirectional real-time communication, strengthening exchange and improving interaction efficiency, thereby improving learning efficiency.
Drawings
FIG. 1 is a schematic diagram of the system of Example 1;
FIG. 2 is a schematic diagram of the virtual animation generation system of Example 1 generating single virtual action data;
FIG. 3 is a schematic diagram of the virtual animation generation system of Example 1 generating combined virtual action data;
FIG. 4 is a flowchart of the method of Example 2;
FIG. 5 is a flowchart of the virtual animation generation of single virtual action data in Example 2;
FIG. 6 is a flowchart of the virtual animation generation of combined virtual action data in Example 2.
Detailed Description
Example 1
Referring to FIGS. 1 to 3, the invention is a future classroom interaction system based on virtual reality technology, comprising:
a correlation module: used for binding a person to a model in a resource library, the model having a plurality of attribute data;
an action acquisition module: used for acquiring the person's real-time action information;
a sound acquisition module: used for acquiring the person's real-time sound information;
an information analysis module: used for parsing the real-time action information and real-time sound information to generate real-time action parameter information and multi-azimuth sound information; it specifically comprises an action analysis module and a sound analysis module;
an action analysis module: used for parsing the real-time action information into real-time action parameter information, the real-time action parameter information comprising action part information, displacement information and angle information, and the attribute data comprising skeleton information, animation information and edge information;
a sound analysis module: used for processing the real-time sound information into sound information for a plurality of different azimuths and generating the multi-azimuth sound information; it specifically comprises a filtering module and a three-dimensional module;
a filtering module: used for filtering and denoising the real-time sound information to obtain denoised sound data;
a three-dimensional module: used for performing multi-azimuth processing on the denoised sound data to generate the multi-azimuth sound information;
a model behavior data generation module: used for matching the real-time action parameter information against the model's attribute data to obtain virtual action data, processing the multi-azimuth sound information to generate virtual sound data, and combining the virtual sound data with the virtual action data to generate model behavior data; it specifically comprises a matching module, a sound generation module, an action fusion module, a collision analysis module, a combination module and an auxiliary analysis module:
a matching module: used for binding the action part information to the model's skeleton information, and matching single displacement information and single angle information against the model's single animation information to obtain single virtual action data;
a sound generation module: used for encoding the multi-azimuth sound information to generate virtual sound data;
an action fusion module: used for fusing multiple groups of single virtual action data to obtain combined virtual action data for synchronous actions;
a collision analysis module: used for analyzing the single or combined virtual action data against the model's collision information to judge whether the action collides with an object model in the spatial domain; if not, the data is uploaded directly to the combination module; if so, the object model is bound to the skeleton information to obtain single or combined virtual action data carrying the object model's motion information, which is then uploaded to the combination module;
a combination module: used for combining the single virtual action data, or the combined virtual action data, with the corresponding virtual sound data to generate model behavior data;
an auxiliary analysis module: used for comparing the model behavior data against the model's edge information to judge whether it conflicts with the spatial domain in which it is to be generated; if so, a prompt is issued and the model behavior data is intercepted; if not, the data is uploaded to the transmission module;
a transmission module: used for uploading the model behavior data to the transfer module; it comprises a compression module and an uploading module: the compression module compresses the model behavior data into compressed model behavior data, improving transmission efficiency, and the uploading module uploads the compressed model behavior data to the transfer module;
a transfer module: used for receiving the model behavior data and distributing it to different users; it comprises a receiving module, a distribution module and a sending module: the receiving module receives the compressed model behavior data, the distribution module distributes it to different users according to requirements, and the sending module sends the distributed data to the corresponding user sides;
an interaction module: used for generating, from the model behavior data a user receives, a virtual animation consistent with the person's behavior and displaying it; the user then makes a feedback behavior toward the displayed animation, the feedback behavior is turned into corresponding model behavior data by the animation generation system and sent to the user who needs to receive the feedback, and the virtual animation is finally displayed in that user's interaction module; this cycle repeats until the interaction is complete.
Example 2
Referring to FIGS. 4 to 6, the invention is an animation display method for a future classroom based on virtual reality technology, comprising the following steps:
1) virtual animation generation:
1.1) binding the person to a model in a resource library, the model having a plurality of attribute data; for example, binding the person to a cartoon teacher model in the model library;
1.2) collecting the person's real-time action information and real-time sound information, for example arm-raising and leg-raising actions and the voice of an explanation;
1.3) parsing the real-time action information into real-time action parameter information comprising action part information, displacement information and angle information (the attribute data comprising skeleton information, animation information and edge information); filtering and denoising the real-time sound information to obtain denoised sound data; and performing multi-azimuth processing on the denoised sound data to obtain multi-azimuth sound information. For example, the arm-raising and leg-raising actions are parsed into arm and leg action part information, arm and leg displacement information, and arm and leg rotation angle information; noise is filtered out of the explanation to obtain denoised sound data with improved clarity; and the denoised sound data is processed into sound information for several different azimuths, consistent with the everyday experience that sound is louder when near and quieter when far.
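That "louder when near, quieter when far" rule can be modeled with a simple inverse-distance gain; the sketch below is illustrative only and not the patent's formula:

```python
# A minimal stand-in for distance-dependent loudness: inverse-distance gain
# with a reference floor so the gain never exceeds 1.
import numpy as np

def distance_gain(signal, distance_m, ref_m=1.0):
    return signal * (ref_m / max(distance_m, ref_m))

voice = np.ones(4)                     # a toy constant-amplitude signal
print(distance_gain(voice, 2.0))       # half amplitude at 2 m
print(distance_gain(voice, 8.0))       # one-eighth amplitude at 8 m
```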
1.4) binding the action part information to the model's skeleton information, and matching single displacement information and single angle information against the model's single animation information to obtain single virtual action data. For example, the person's arm is bound to the model's arm skeleton information and the person's leg to the model's leg skeleton information; the arm-raising displacement and angle information is matched to the model's arm-raising animation information, and the leg-raising displacement and angle information to the model's leg-raising animation information, yielding virtual action data in which the arm and then the leg are raised in sequence;
when several groups of animation information proceed simultaneously, the action part information is bound to the model's skeleton information while a plurality of displacement information and angle information are matched against a plurality of the model's animation information to obtain a plurality of virtual action data, which are fused into combined virtual action data for synchronous actions. For example, if the leg and arm are raised at the same time, the generated arm-raising and leg-raising virtual action data are fused into combined virtual action data for the synchronous leg-raising and arm-raising action, as the sketch below illustrates.
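A toy version of that fusion step: two single-action tracks driving disjoint bones are merged onto one shared timeline so both play back as a single synchronous action (all names hypothetical):

```python
# Hypothetical sketch of the action fusion step.
def fuse(*tracks):
    combined = {}
    for track in tracks:                 # track: bone -> list of (time, angle)
        for bone, keys in track.items():
            combined.setdefault(bone, []).extend(keys)
    for keys in combined.values():
        keys.sort()                      # keep keyframes in time order
    return combined

raise_arm = {"arm_bone": [(0.0, 0.0), (0.5, 90.0)]}
raise_leg = {"leg_bone": [(0.0, 0.0), (0.5, 45.0)]}
print(fuse(raise_arm, raise_leg))        # both parts move over the same 0.5 s
```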
The multi-azimuth sound information is then encoded to generate virtual sound data; for example, the multi-azimuth sound information of the explanation, covering several different azimuths, is encoded into virtual sound data with a stereoscopic effect.
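One simple way to obtain such a stereoscopic effect is constant-power panning of each azimuth channel into a two-channel mix; the gain law here is our assumption, not the patent's encoding:

```python
# Hypothetical sketch of the sound generation step: mix per-azimuth channels
# down to a stereo pair so playback carries a stereoscopic effect.
import numpy as np

def encode_stereo(channels):
    # channels: azimuth in degrees -> mono signal.
    n = len(next(iter(channels.values())))
    left, right = np.zeros(n), np.zeros(n)
    for az, sig in channels.items():
        theta = np.deg2rad((az + 90) / 2)   # -90..90 deg -> 0..pi/2
        left += sig * np.cos(theta)          # left-heavy for negative azimuths
        right += sig * np.sin(theta)         # right-heavy for positive azimuths
    return np.stack([left, right])

tone = np.sin(np.linspace(0, 2 * np.pi, 8))
print(encode_stereo({-90: tone, 90: tone}).shape)   # (2, 8) stereo output
```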
The single or combined virtual action data is analyzed against the model's collision information to judge whether it collides with an object model in the spatial domain; if not, it is uploaded directly to the combination module; if so, the object model is bound to the skeleton information to obtain single or combined virtual action data carrying the object model's motion information, which is then uploaded to the combination module. For example, the arm-raising and leg-raising virtual actions are compared against the model's collision information to judge whether they contact the blackboard-eraser model: if not, the subsequent combination step proceeds directly; if the raised arm contacts the eraser model, the eraser model is bound to the skeleton information to obtain virtual action data in which the eraser moves along with the arm-raising action, after which the subsequent combination step proceeds.
The single virtual action data, or the combined virtual action data, is combined with the corresponding virtual sound data to generate model behavior data; for example, the generated virtual action data for raising the arm and then the leg in sequence, or the combined virtual action data for raising them synchronously, is combined with the virtual sound data to generate model behavior data.
The model behavior data is compared against the model's edge information to judge whether it conflicts with the spatial domain in which it is to be generated; if so, a prompt is issued and the model behavior data is intercepted; if not, step 1.5) is executed. For example, the arm-raising and leg-raising virtual actions are compared against the model's edge information to judge whether they overlap or cross the space in a way that causes visual interference: if so, a reminder is issued and the next step is not executed for that model behavior data; if not, the next step proceeds.
1.5) uploading the model behavior data to the transfer system:
1.5.1) compressing the model behavior data into animation data, for example by reducing the resolution and frame rate of the virtual action data to shrink the file, thereby improving transmission efficiency; a sketch of this idea follows.
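A minimal sketch of compression by frame-rate reduction; the 60 to 20 fps figures are illustrative only:

```python
# Hedged sketch of the embodiment's compression idea: dropping frames lowers
# the frame rate, and hence the file size, before upload.
def downsample(frames, src_fps=60, dst_fps=20):
    step = src_fps // dst_fps
    return frames[::step]

frames = list(range(60))                 # one second of 60 fps animation
small = downsample(frames)               # 20 frames survive
print(len(frames), "->", len(small))
```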
1.5.2) uploading the animation data to the transfer system for unified distribution;
1.6) receiving the animation data and distributing it to different users:
1.6.1) receiving the animation data and preparing distribution;
1.6.2) distributing the animation data to different users according to requirements; for example, first-grade course animation data is distributed to first-grade users and second-grade course animation data to second-grade users, where the first-grade course animation contains the arm-raising action and the second-grade course animation contains the synchronous arm-raising and leg-raising action;
1.6.3) sending the distributed animation data to the corresponding user sides; for example, the course animation data assigned to first-grade users is sent to the first-grade users, and the second-grade course animation data to the second-grade users.
2) interaction:
Generating, from the model behavior data the user receives, a virtual animation consistent with the person's behavior and displaying it; the user makes a feedback behavior toward the displayed animation, the feedback behavior is turned into corresponding model behavior data by the animation generation system and sent to the user who needs to receive the feedback, and the virtual animation is finally displayed in the interaction module, the cycle repeating until the interaction is complete. For example, a teacher prepares to hand a blackboard-eraser model to student A among the first-grade users: the teacher model correspondingly completes, within the spatial domain of the future classroom, the actions of raising an arm, picking up the eraser and handing it over, together with the accompanying sound, and these are sent to student A. When student A hears the teacher's roll call and sees the animation of the teacher model handing over the eraser, student A correspondingly raises a hand to touch the eraser within the spatial domain. When the teacher sees the student A model receive the eraser and releases the hand, the eraser moves with the hand of the student A model, completing the eraser hand-off interaction.

Claims (10)

1. A future classroom interaction system based on virtual reality technology, characterized in that it comprises an interaction module and at least two virtual animation generation systems, each virtual animation generation system comprising:
a correlation module: used for binding a person to a model in a resource library, the model having a plurality of attribute data;
an action acquisition module: used for acquiring the person's real-time action information;
a sound acquisition module: used for acquiring the person's real-time sound information;
an information analysis module: used for parsing the real-time action information and real-time sound information to generate real-time action parameter information and multi-azimuth sound information;
a model behavior data generation module: used for matching the real-time action parameter information against the model's attribute data to obtain virtual action data, processing the multi-azimuth sound information to generate virtual sound data, and combining the virtual sound data with the virtual action data to generate model behavior data;
a transmission module: used for uploading the model behavior data to the transfer module;
a transfer module: used for receiving the model behavior data and then distributing it to the users who need to interact;
an interaction module: used for generating, from the model behavior data a user receives, a virtual animation consistent with the person's behavior and displaying it; the user then makes a feedback behavior toward the displayed virtual animation, the feedback behavior is turned into corresponding model behavior data by the animation generation system and sent to the user who needs to receive the feedback, and the virtual animation is finally displayed in that user's interaction module; this cycle repeats until the interaction is complete.
2. The future classroom interaction system based on virtual reality technology of claim 1, characterized in that the information analysis module comprises:
an action analysis module: used for parsing the real-time action information into real-time action parameter information, the real-time action parameter information comprising action part information, displacement information and angle information, and the attribute data comprising skeleton information, animation information and edge information;
a sound analysis module: used for parsing the real-time sound information into sound information for a plurality of different azimuths, generating the multi-azimuth sound information;
and the model behavior data generation module comprises:
a matching module: used for binding the action part information to the model's skeleton information, and matching single displacement information and single angle information against the model's single animation information to obtain single virtual action data;
a sound generation module: used for encoding the multi-azimuth sound information to generate virtual sound data;
a combination module: used for combining the single virtual action data with the virtual sound data to generate model behavior data.
3. The future classroom interaction system based on virtual reality technology of claim 2, characterized in that the sound analysis module comprises:
a filtering module: used for filtering and denoising the real-time sound information to obtain denoised sound data;
a three-dimensional module: used for performing multi-azimuth processing on the denoised sound data to obtain the multi-azimuth sound information;
and the model behavior data generation module further comprises:
an action fusion module: used for fusing multiple groups of single virtual action data to obtain combined virtual action data for synchronous actions;
the combination module specifically: used for combining the single virtual action data, or the combined virtual action data, with the corresponding virtual sound data to generate model behavior data.
4. The future classroom interaction system based on virtual reality technology of claim 3, characterized in that the model behavior data generation module further comprises:
a collision analysis module: used for analyzing the single or combined virtual action data against the model's collision information to judge whether the action collides with an object model in the spatial domain; if not, the data is uploaded directly to the combination module; if so, the object model is bound to the skeleton information to obtain single or combined virtual action data carrying the object model's motion information, which is then uploaded to the combination module;
an auxiliary analysis module: used for comparing the model behavior data against the model's edge information to judge whether it conflicts with the spatial domain in which it is to be generated; if so, a prompt is issued and the model behavior data is intercepted; if not, the data is uploaded to the transmission module.
5. The future classroom interaction system based on virtual reality technology of claim 4, characterized in that the transmission module comprises a compression module and an uploading module, and the transfer module comprises a receiving module, a distribution module and a sending module;
the compression module is used for compressing the model behavior data into animation data, improving transmission efficiency;
the uploading module is used for uploading the animation data to the transferring module;
the receiving module is used for receiving the animation data;
the distribution module is used for distributing the animation data to different users according to requirements;
and the sending module is used for sending the distributed animation data to the corresponding user side.
6. A future classroom interaction method based on virtual reality technology is characterized by comprising the following steps:
1) virtual animation generation:
1.1) binding a person to a model in a resource library, the model having a plurality of attribute data;
1.2) collecting the person's real-time action information and real-time sound information;
1.3) parsing the real-time action information and real-time sound information to generate real-time action parameter information and multi-azimuth sound information;
1.4) matching the real-time action parameter information against the model's attribute data to obtain virtual action data; processing the multi-azimuth sound information to generate virtual sound data; and combining the virtual sound data with the virtual action data to generate model behavior data;
1.5) uploading the model behavior data to a transfer system;
1.6) receiving the model behavior data and distributing it to different users;
2) interaction:
generating, from the model behavior data the user receives, a virtual animation consistent with the person's behavior and displaying it; the user makes a feedback behavior toward the displayed virtual animation, the feedback behavior is turned into corresponding model behavior data by the animation generation system and sent to the user who needs to receive the feedback, and the virtual animation is finally displayed in the interaction module; this cycle repeats until the interaction is complete.
7. The future classroom interaction method based on virtual reality technology of claim 6, characterized in that step 1.3) and step 1.4) are specifically:
1.3) parsing the real-time action information into real-time action parameter information, the real-time action parameter information comprising action part information, displacement information and angle information, and the attribute data comprising skeleton information, animation information and edge information; and processing the real-time sound information into sound information for a plurality of different azimuths, generating multi-azimuth sound information;
1.4) binding the action part information to the model's skeleton information, and matching single displacement information and single angle information against the model's single animation information to obtain single virtual action data; encoding the multi-azimuth sound information to generate virtual sound data; and combining the single virtual action data with the virtual sound data to generate model behavior data.
8. The future classroom interaction method based on virtual reality technology of claim 7, characterized in that the multi-azimuth sound information generation in step 1.3) is specifically:
filtering and denoising the real-time sound information to obtain denoised sound data, then performing multi-azimuth processing on the denoised sound data to obtain the multi-azimuth sound information;
and step 1.4) further comprises:
1.4) binding the action part information to the model's skeleton information, matching a plurality of displacement information and a plurality of angle information against a plurality of the model's animation information to obtain a plurality of virtual action data, and fusing the plurality of virtual action data into combined virtual action data for synchronous actions; then combining the single virtual action data, or the combined virtual action data, with the corresponding virtual sound data to generate model behavior data.
9. The future classroom interaction method based on virtual reality technology of claim 8, characterized in that step 1.4) further comprises:
analyzing the single or combined virtual action data against the model's collision information to judge whether the action collides with an object model in the spatial domain; if not, combining directly; if so, binding the object model to the skeleton information to obtain single or combined virtual action data carrying the object model's motion information, then combining;
comparing the model behavior data against the model's edge information to judge whether it conflicts with the spatial domain in which it is to be generated; if so, issuing a prompt and intercepting the model behavior data; if not, executing step 1.5).
10. The future classroom interaction method based on virtual reality technology of claim 9, characterized in that the specific steps of step 1.5) and step 1.6) are as follows:
1.5.1) compressing the model behavior data into animation data, improving transmission efficiency;
1.5.2) uploading the animation data to a transfer system;
1.6.1) receiving animation data;
1.6.2) distributing the animation data to different users according to requirements;
1.6.3) sending the distributed animation data to the corresponding user terminal.
CN202110045162.7A, filed 2021-01-13 (priority 2021-01-13): Future classroom interaction system and method based on virtual reality technology; status: Pending; published as CN112732084A.

Priority Applications (1)

CN202110045162.7A (priority and filing date 2021-01-13): Future classroom interaction system and method based on virtual reality technology

Publications (1)

CN112732084A, published 2021-04-30

Family

ID=75592124

Family Applications (1)

CN202110045162.7A (filed 2021-01-13, priority 2021-01-13): Future classroom interaction system and method based on virtual reality technology, pending

Country Status (1)

CN: CN112732084A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party

CN106502388A * (priority 2016-09-26, published 2017-03-15), Huizhou TCL Mobile Communication Co., Ltd.: An interactive motion method and head-mounted intelligent device
CN108200446A * (priority 2018-01-12, published 2018-06-22), Beijing Mizhi Technology Co., Ltd.: Online multimedia interaction system and method for a virtual avatar
CN108805766A * (priority 2018-06-05, published 2018-11-13), Chen Yong: An AR somatosensory immersive teaching system and method
CN109829976A * (priority 2018-12-18, published 2019-05-31), Wuhan Xishan Yichuang Culture Co., Ltd.: A real-time performance method and system based on holographic technology
CN110488975A * (priority 2019-08-19, published 2019-11-22), Shenzhen Tongzhi Technology Co., Ltd.: A data processing method and related apparatus based on artificial intelligence
CN111986297A * (priority 2020-08-10, published 2020-11-24), Shandong Jindong Digital Creative Co., Ltd.: Voice-controlled real-time driving system and method for virtual character facial expressions


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination