NL2026359B1 - Method for multi-channel fusion and presentation of virtual learning environment oriented to field practice teaching - Google Patents

Method for multi-channel fusion and presentation of virtual learning environment oriented to field practice teaching

Info

Publication number
NL2026359B1
Authority
NL
Netherlands
Prior art keywords
learning environment
sound
student
channel
scene
Prior art date
Application number
NL2026359A
Other languages
Dutch (nl)
Other versions
NL2026359A (en)
Inventor
Yang Zongkai
Wu Ke
Zhong Zheng
Wu Di
Original Assignee
Univ Central China Normal
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Central China Normal
Publication of NL2026359A
Application granted
Publication of NL2026359B1

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The present invention pertains to the field of teaching applications of a virtual reality technology, and provides a method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching. The method includes three steps: content generation, fusion of visual and auditory channels, and multi-channel interaction design. In the present invention, a set of methods for data acquisition, knowledge organization, and scene switching is established according to characteristics of teaching content in a virtual learning environment; synchronous updating of visual and auditory channels is implemented in a spatial rendering mode; and input and output priorities of various interactive channels are evaluated, and multi-sensory cooperative interaction of a student in the virtual learning environment is completed. By adding an auditory cue, adding a mode of determining multi-channel user interaction, and implementing fusion and presentation of the virtual learning environment, the present invention can enhance realism of the learning environment and improve immersive experience of a participant.

Description

METHOD FOR MULTI-CHANNEL FUSION AND PRESENTATION OF VIRTUAL LEARNING ENVIRONMENT ORIENTED TO FIELD PRACTICE TEACHING
TECHNICAL FIELD The present invention pertains to the field of teaching applications of virtual reality (VR) technology, and more specifically, relates to a method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching.
BACKGROUND Field practice is an important step in training specialists in subjects such as geography, geology, and biology, and is an important educational activity for training students to link theory with practice, master the basic knowledge and skills of their subjects, and improve their overall quality and their practical and innovative abilities. However, field practice currently faces many problems. There is a shortage of teachers with a solid foundation in taxonomy and rich experience in field practice. Many students participate in practice within a short time, making "one-to-one" guidance by a teacher difficult. Because practice content and modes are limited, practice largely stops at recognizing species and acquiring and preparing specimens, and practice achievements are poorly shared and lack interactivity. Specimen acquisition conflicts with environmental protection. Changes of seasons, weather, and biotopes mean that many tasks that should be completed in field practice can hardly be completed. Finally, field practice involves natural disasters such as torrential floods, landslides, and debris flows, as well as safety risks such as insect stings, snake bites, falls, and sunstroke in summer, all of which affect the students' field practice results.
Building a field practice environment with VR technology can break time and space limits. A student can immersively carry out field practice indoors and repeat the learning until an ideal effect is achieved. This is a beneficial supplement to field practice teaching: it can effectively solve many of the difficulties encountered in field practice and greatly enhance the student's interest in learning. As commercial 5G networks are quickly popularized, the performance bottlenecks of VR content, such as ultra-high resolution, full view, and low latency, will be largely solved. A virtual learning environment oriented to field practice teaching will therefore have broad application prospects.
Currently, although a realistic field practice virtual learning environment can be quickly built in a panoramic mode, the actual requirements of field practice teaching still cannot be fully satisfied. Using field practice of biology as an example, pictures of communication, social, and reproductive behavior between animals may be captured in a VR panoramic video, but it is difficult to convey the biological behavior implied therein to a student; for example, an animal's vocal mechanism, its sound signal characteristics, and sound wave reception, processing, and recognition cannot easily be conveyed in a picture. Through multi-camera sound synchronization processing, the student can perceive differences between sounds coming from all directions. However, although panoramic sound simulation can already present a naturally realistic effect through sound source changes, this holds only when the cameras are relatively fixed, so the omnidirectional auditory presentation requirements of field practice teaching can hardly be satisfied.
SUMMARY In view of the foregoing disadvantages or improvement requirements of the prior art, the present invention provides a method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching, and centering on requirements of virtual simulation teaching in field practice teaching, provides a solution to content generation, audiovisual fusion, and cooperative interaction. A set of methods for data acquisition, knowledge organization, and scene switching is established according to characteristics of teaching content in a virtual learning environment; synchronous updating of visual and auditory channels is implemented in a spatial rendering mode; and input and output priorities of various interactive channels are evaluated, and multi-sensory cooperative interaction of a student in the virtual learning environment is completed.
Objectives of the present invention are achieved by the following technical measures. A method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching includes three steps: content generation, fusion of visual and auditory channels, and multi-channel interaction design.
A) Content generation: To satisfy the requirements of field practice teaching, complete VR panoramic content acquisition in a practice area by using a combination of aerial photographing and terrestrial acquisition, establish an organizational mode of knowledge elements in different layers and areas in a virtual learning environment, and complete optimization of the scene-to-scene jumping effect.
A-a) Data acquisition: To reproduce a field practice teaching process realistically, acquire teaching information in a field practice area from two layers—terrestrial observation points and aerial photographing areas, and complete digitization in a VR panoramic video mode.
A-a-i) Acquisition of terrestrial observation point information: For terrestrial observation practice content, use a high-definition motion camera group to capture dynamic images from all angles, implement high-density multi-angle field real information acquisition, and obtain complete material information of a field practice scene.
A-a-ii) Acquisition of aerial photographing information by using an unmanned aerial vehicle: For practice content such as observation of an aerial view and vertical distribution of biotopes in a macro-scale field practice area, take photos of biotopes of aerial photographing areas in different ecotopes by using the unmanned aerial vehicle, to obtain material information of a full field of vision.
A-a-iii) Mapping therebetween, where an acquisition point of aerial photographing by the unmanned aerial vehicle needs to correspond to content of a terrestrial observation point, that is, when panoramic aerial photographing content in one area is acquired once, information data of a plurality of terrestrial observation points needs to be acquired correspondingly.
A-b) Data organization: Establish an aggregation mode between knowledge elements in different layers and different areas according to a progressive relationship and an association between teaching content; and fuse subject knowledge content and a practice route according to a field practice process routine.
A-b-i) Acquisition point annotation: Use an electronic map as a basic geographic data platform, use different symbols to represent the VR panoramic acquisition points of terrestrial observation and of aerial photographing by the unmanned aerial vehicle, and annotate the VR panoramic acquisition points on the electronic map according to their spatial positions.
A-b-ii) Vertical association: Establish an association relationship between an aerial photographing scene and a terrestrial acquisition point in the virtual learning environment by using a pyramid hierarchical structure model, and implement fast switching from a macro scene to a micro object.
A-b-iii) Horizontal association: In a sandbox model of the terrain and landform of the practice area, combine ecotope aerial photographing points, terrestrial observation points, and subject knowledge points according to the moving route of field practice, to form different survey routes.
A-c) Scene transition: In field practice teaching, an association exists between an internship site and content. To reduce dizziness of a student in the VR scene switching process, design a solution for optimizing the scene-to-scene jumping and switching effect around the mutual relationship between internship sites and content.
A-c-i) Guiding element design: The interactive interface of the virtual learning environment changes from a two-dimensional plane to a three-dimensional sphere, which exceeds the limits of a conventional display screen. Therefore, media navigation information such as text, symbols, and voice is designed to guide the student to a broader field of vision and to draw attention to important learning content.
A-c-ii) Scene switching: According to the geographically relative positions of two scenes, add an indicative icon of the target switching point to the previous scene as an entry for jumping to the next scene, where the pattern of the icon may be designed according to the scene background.
A-c-iii) Transition optimization: With respect to great differences in picture color, brightness, or content during scene switching, use similar fusion, gradient fusion, and highlighting modes to resolve the phenomenon of a visual mutation.
B) Fusion of visual and auditory channels: Represent attenuation of a learning object and a background sound source in the virtual learning environment by using a linear volume-distance attenuation method, and implement a spatial rendering mode for the sounds of different objects in a VR scene; and, with reference to a head tracking technology, implement synchronous updating of the panoramic video and sound while the student's head moves.
B-a) Spatial rendering of an audiovisual combination: Represent attenuation of an object and other background sound sources in the virtual learning environment by using the linear volume-distance attenuation method in combination with a binaural positioning audio technology based on a Doppler effect model, and implement a spatial rendering mode applicable to the sounds of different objects and different background sound effects in the VR scene.
B-a-i) Simulation of multiple sound sources: Simulate static and dynamic point sound sources of corresponding objects in the virtual learning environment according to dynamically changing parameters such as position, direction, attenuation, and Doppler effect, together with a background sound effect that has no position or speed parameters.
B-a-ii) Mixing of the multiple sound sources: To simulate the vocal scenes of objects (such as animals or plants) in a real field environment, mutually fuse the spectra of sounds of different objects, and generate a multi-track mix.
B-a-iii) Sound attenuation effect representation: Use a combination of a logarithmic attenuation mode and a linear attenuation mode to reproduce the impact of factors such as distance and direction in the real field environment on the sound attenuation effect; for example, use the logarithmic attenuation mode for a directional point sound source, and use the linear attenuation mode for the background sound source.
B-a-iiii) Binaural positioning: Based on attributes such as sound source motion, direction, position, and structure that are reflected by sound loudness and spectrum characteristics, determine the position of a sound source in the virtual learning environment relative to the position of the student according to sound propagation principles.
B-a-iiiii) Spatial rendering: Considering a Doppler effect, render left and right sound channels with different strength according to the position of the student, and a direction, a distance, and a motion change of the sound source in the virtual learning environment.
B-b) Synchronous audio and video updating: With reference to the head tracking technology, support synchronous updating of a video picture and sound during moving of the head of the student in the virtual learning environment, and implement fusion and presentation of the visual and auditory channels.
B-b-i) Head and ear synchronization: Track a position and posture of the head of the student in the virtual learning environment in real time according to a refreshing frequency of a VR picture, redetermine the distance and direction of the sound source relative to the student, and implement synchronous rendering of a picture observed by the student and a sound heard by the student.
B-b-ii) Audiovisual fusion: Present a content scene in the virtual learning environment according to a teaching requirement, position an angle of view to corresponding content through head turning of the student, and render volume of different sound sources according to a distance between the student and the sound source of the content.
B-b-iii) Interference cancellation of the multiple sound sources: For the multiple sound sources in the virtual learning environment, use a sound source attenuation function, and simulate a sound reverberation range, thereby reducing interference factors of the multiple sound sources.
C) Multi-channel interaction design: With respect to the requirement of multi-sensory cooperative interaction of the student in the virtual learning environment, screen, determine, decide, and fuse multi-sensory interactive behavior according to the corresponding parameters of interactive objects, in the order of interactive task, interactive behavior, interactive experience.
C-a) Interactive task design: Achieve orderly participation of interactive behavior by properly designing an interactive task, and form good interactive experience, thereby providing a good mechanism for multi-channel interaction.
C-a-i) Interactive task decomposition: During task design, a task needs to be decomposed into a temporal task and a spatial task according to its temporal and spatial attributes, and the interactive mode, objective, action, function, and specific process of the task are designed according to those attributes.
C-a-ii) Spatial task design: Compared with conventional multimedia learning resources, visual enhancement is an advantage of the virtual learning environment. When designing a spatial interactive task, coherence of visual feedback should always be ensured, and the spatial task should also be executed preferentially during execution.
C-a-iii) Temporal task design: Because this type of task has a long time unit, focus on the design of auditory channel information, the content of which includes background music, feedback sound effects, and the like, and mainly consider sound information content and accuracy in the output step.
C-b) Task decision: After multi-channel information is input, first determine the cooperative relationship therebetween and complete fusion of the input multi-channel information; then determine the weight and reliability of each piece of output information, accurately convey feedback information to the sensory organs of the student, and complete multi-channel fusion.
C-b-i) Input information synthesis: According to the input information of channels such as the visual, auditory, and tactile channels, determine the cooperative relationship between interactive actions during task execution, and complete synthesis of the input information of each channel.
C-b-ii) Multi-channel integration: Decide the weight of the input information of each channel to ensure that the output information is accurately conveyed to the student in the virtual learning environment, which forms a condition for multi-channel integration.
C-b-iii) Multi-channel fusion: By properly allocating the output information of each channel, accurately convey the feedback information to the sensory organs of the student and complete multi-channel fusion, so that the student obtains good interactive experience.
The present invention provides a method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching and, centering on the requirements of virtual simulation teaching in field practice, provides a solution for content generation, audiovisual fusion, and cooperative interaction. A set of methods for data acquisition, knowledge organization, and scene switching is established according to the characteristics of teaching content in a virtual learning environment; synchronous updating of the visual and auditory channels is implemented in a spatial rendering mode; and the input and output priorities of the various interactive channels are evaluated to complete multi-sensory cooperative interaction of a student in the virtual learning environment. By adding auditory cues, adding a mode of determining multi-channel user interaction, and implementing fusion and presentation of the virtual learning environment, the present invention can enhance the realism of the learning environment and improve the immersive experience of participants.
BRIEF DESCRIPTION OF DRAWINGS FIG. 1 is a flowchart of a method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a correspondence between an aerial photographing area of an unmanned aerial vehicle and a terrestrial observation point according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of designing a scene hotspot according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of designing scene switching according to an embodiment of the present invention;
FIG. 5 is a schematic flowchart of audio and video synchronization in a virtual learning environment according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of stereo audio processing according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a volume-distance attenuation mode of a sound source according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of division of a sound field reverberation effect of a point sound source according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a binaural positioning model according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a spatial relationship between a student and a sound source according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a layout of learning content according to an embodiment of the present invention; and
FIG. 12 is a diagram of task state transition according to an embodiment of the present invention.
DESCRIPTION OF EMBODIMENTS To make the objectives, technical solutions, and advantages of the present invention more comprehensible, the following describes the present invention in detail with reference to accompanying drawings.
As shown in FIG. 1, an embodiment of the present invention provides a method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching, where the method includes the following steps:
A) Content generation: Because content generation relates to creation of field practice teaching knowledge in a virtual learning environment, complete VR panoramic content acquisition in a practice area by using a combination of aerial photographing and terrestrial acquisition, establish an organization mode of knowledge elements in different layers and areas in the virtual learning environment, and implement optimization of a scene-to-scene jumping effect.
Specifically, the following steps are included:
A-a) Data acquisition: To reproduce a field practice teaching process realistically, as shown in FIG. 2, acquire VR panoramic videos according to the teaching requirement and the different seasons of spring, summer, autumn, and winter; acquire teaching information in a field practice area from two layers, terrestrial observation points and aerial photographing areas; and complete digitization in a VR panoramic video mode.
A-a-i) Acquisition of terrestrial observation point information: For terrestrial observation practice content, use a high-definition motion camera group to capture dynamic images from all angles, implement high-density multi-angle acquisition of real field information, and obtain the complete panoramic teaching information of a field practice scene.
A-a-ii) Acquisition of aerial photographing information by using an unmanned aerial vehicle: For practice content such as observation of an aerial view and the vertical distribution of biotopes in a macro-scale field practice area, take 360° photos of the biotopes of aerial photographing areas at fixed hovering points (120-500 meters) in the air in different ecotopes by using the unmanned aerial vehicle, to obtain material information of the full field of vision of the practice area.
A-a-iii) Mapping therebetween, where an acquisition point of aerial photographing by the unmanned aerial vehicle needs to be associated with content of a terrestrial observation point, that is, an acquisition area of panoramic aerial photographing is associated with panoramic material information of a plurality of terrestrial observation points acquired by the unmanned aerial vehicle in the area.
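As a concrete illustration of this one-to-many mapping, each aerial acquisition area can simply index the terrestrial observation points captured inside it. The following Python sketch is illustrative only; all identifiers, fields, and sample values are hypothetical, not taken from the patent.

```python
# Minimal sketch of the aerial-to-terrestrial mapping of step A-a-iii).
# All names and sample data are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ObservationPoint:
    point_id: str
    lat: float
    lon: float
    panorama_uri: str            # VR panoramic video captured at ground level

@dataclass
class AerialArea:
    area_id: str
    hover_altitude_m: float      # fixed hovering point, e.g. 120-500 m
    panorama_uri: str            # 360-degree aerial panorama of the area
    ground_points: list[ObservationPoint] = field(default_factory=list)

# One panoramic aerial acquisition is associated with several ground points.
area = AerialArea("ecotope-01", 300.0, "vr/aerial/ecotope01.mp4")
area.ground_points.append(ObservationPoint("obs-01", 30.52, 114.36, "vr/ground/obs01.mp4"))
area.ground_points.append(ObservationPoint("obs-02", 30.53, 114.37, "vr/ground/obs02.mp4"))
```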
A-b) Data organization: Establish an aggregation mode between knowledge elements in different layers and different areas according to a progressive relationship and an association between teaching content; and fuse subject knowledge content and a practice route according to a field practice process routine.
A-b-i) Acquisition point annotation: Because there are many VR panoramic acquisition points, an electronic map may be used as the basic geographic data platform; hotspot and helicopter symbols are used to represent the VR panoramic acquisition points of terrestrial observation and of aerial photographing by the unmanned aerial vehicle, and the VR panoramic acquisition points are annotated at their corresponding spatial positions.
A-b-ii) Vertical association: Represent the association relationship between an aerial photographing acquisition point and a terrestrial acquisition point in the virtual learning environment by using a pyramid hierarchical structure model, and implement fast switching from a macro scene to a micro object.
A-b-iii) Horizontal association: In a sandbox model of the terrain and landform of the practice area, associate content such as ecotope aerial photographing points, terrestrial observation points, and subject knowledge points according to the internal logic of knowledge about field practice, to form different survey routes.
A-c) Scene transition: In field practice teaching, an association exists between a learning site and content. By fully using the associations between different VR scenes, design a solution for optimizing the scene-to-scene jumping and switching effect, to reduce dizziness of a student in the jumping process.
A-c-i) Guiding element design: Media navigation information such as text, symbols, and voice may direct the student's attention to important learning content. FIG. 3 presents the process of designing and adding a hotspot (a planar hotspot and a transparent hot area) in the virtual learning environment.
A-c-ii) Scene switching: To implement jumping between a scene 1 and a scene 2 in FIG. 4, first obtain a scene switching point in scene 1, and then add an indicative icon of scene 2 at that position as an entry for jumping to scene 2, where the pattern of the icon should be designed according to the scene background, and the direction and name of scene 2 are marked.
A-c-iii) Transition optimization: FIG. 4 presents the different processing modes used for differences between scenes during scene switching: fusion displaying for similar scenes, gradient displaying for scenes with great differences, and highlighting for stressing a target scene, to resolve the phenomenon of a visual mutation.
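The hotspot and transition logic of steps A-c-ii) and A-c-iii) can be pictured as a small dispatch: each scene carries indicative icons pointing at target scenes, and the transition mode (fusion, gradient, or highlighting) is chosen from a coarse similarity measure between the two panoramas. The Python sketch below is a schematic reading of those steps; the similarity threshold and all names are assumptions.

```python
# Illustrative sketch of scene switching (A-c-ii) and transition choice (A-c-iii).
from dataclasses import dataclass

@dataclass
class Hotspot:
    icon: str             # pattern designed to match the scene background
    target_scene: str
    direction_deg: float  # where in the panorama the icon is anchored

def choose_transition(similarity: float, stress_target: bool = False) -> str:
    """Pick a transition mode from the coarse similarity of two panoramas."""
    if stress_target:
        return "highlighting"   # stress the target scene
    if similarity > 0.8:        # hypothetical threshold
        return "fusion"         # similar scenes blend directly
    return "gradient"           # large color/brightness/content difference

def jump(current_scene: str, hotspot: Hotspot, similarity: float) -> str:
    """Follow a hotspot from the current scene to its target scene."""
    mode = choose_transition(similarity)
    print(f"{current_scene} -> {hotspot.target_scene} via {mode}")
    return hotspot.target_scene
```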
B) Fusion of visual and auditory channels: Complete spatial rendering of visual and audible content in the virtual learning environment by using a workflow shown in FIG. 5; and with reference to a head tracking technology, implement synchronous updating of a sound and a picture during moving of a head of the student.
B-a) Spatial rendering of an audiovisual combination: Represent attenuation of a learning object and a background sound source by using a linear volume-distance attenuation method in combination with a binaural positioning audio technology based on a Doppler effect model, and implement a spatial rendering mode applicable to the sounds of different objects and different background sound effects in a VR scene.
B-a-i) Simulation of multiple sound sources: Simulate static and dynamic point sound sources of corresponding objects in the virtual learning environment according to dynamically changing parameters such as position, direction, attenuation, and Doppler effect, together with a background sound effect that has no position or speed parameters.
B-a-ii) Mixing of the multiple sound sources: To simulate the sound generation of objects (such as animals or plants) in a real field environment, obtain sound data of the objects by using the stereo audio processing mode shown in FIG. 6, either by actually recording sample sounds or by downloading sounds from an existing audio library; then generate a standard audio file by using audio editing software, mutually fuse the spectra of the sounds of different objects, and generate the required multi-track VR audio.
B-a-iii) Sound attenuation effect representation: First, considering the impact of distance on attenuation in the virtual environment, denote the distance between the head center of the student and a sound source as R, the maximum audible distance as Rmax, the maximum volume of the sound source as Vmax, and the attenuated volume as V. The attenuation is given by formula 1:

V = Vmax × (1 − R / Rmax), for R ≤ Rmax;
V = 0, for R > Rmax (formula 1)

Second, to compensate for the attenuation differences of different sound sources in the virtual learning environment, set minimum and maximum attenuation distances for the sound sources.
(a) A minimum attenuation distance corresponds to maximum volume. If a distance between the sound source and the student is shorter than the minimum attenuation distance, the volume does not change any longer.
(b) A maximum attenuation distance corresponds to minimum volume. If the distance is exceeded, a sound generated by the sound source cannot be heard. With reference to the formula 1 and the volume-distance attenuation mode of the sound source (FIG. 7 presents a schematic diagram of an attenuation mode), a sound field reverberation effect of a point sound source is divided into different reverberation areas shown in FIG. 8. In an actual application, according to an attenuation mode of a sound source, for example, a logarithmic attenuation mode used for a directional point sound source, and a linear attenuation mode used for a background sound source, a received reverberation effect is better if the student is closer to the sound source in the scene.
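Read together, formula 1 and the clamp distances of (a) and (b) define a piecewise gain over distance. The Python sketch below interpolates linearly from Vmax at the minimum attenuation distance down to zero at the maximum (formula 1 is the special case of a zero minimum distance); the logarithmic profile for directional point sources is likewise an assumed shape, since the patent names the mode but not its exact formula.

```python
import math

def linear_gain(r: float, r_min: float, r_max: float, v_max: float) -> float:
    """Linear volume-distance attenuation: formula 1 with clamps (a) and (b)."""
    if r <= r_min:
        return v_max          # (a): inside the minimum distance, volume is fixed
    if r > r_max:
        return 0.0            # (b): beyond the maximum distance, inaudible
    # Formula 1 corresponds to r_min = 0: V = Vmax * (1 - R / Rmax).
    return v_max * (1.0 - (r - r_min) / (r_max - r_min))

def log_gain(r: float, r_min: float, r_max: float, v_max: float) -> float:
    """Assumed logarithmic profile for a directional point source (B-a-iii).
    Requires r_min > 0."""
    r = min(max(r, r_min), r_max)
    return v_max * (1.0 - math.log(r / r_min) / math.log(r_max / r_min))
```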
B-a-iiii) Binaural positioning: As shown in FIG. 9, based on parameters such as the frequency, phase, and amplitude of the sound source, determine the horizontal, front-rear, and vertical directions of a sound in the VR environment, and complete direction positioning of the sound source. Then determine distance attributes such as motion parallax, loudness, initial time delay, Doppler effect, and reverberation according to attributes such as distance, position, and terrain environment that affect sound propagation; calculate parameters such as the distance, speed, and direction of the sound source relative to the student in the virtual learning environment in real time by using a head-related transfer function according to the direction and distance of the sound source; process the signal of the sound source with a convolution operation; and generate the stereo sound of the sound source.
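In its simplest form, the binaural step just described reduces to convolving the mono source signal with a left and a right head-related impulse response (HRIR) selected for the source direction. A minimal NumPy/SciPy sketch, assuming HRIR arrays of equal length are already available (for example from a public HRTF dataset):

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono: np.ndarray, hrir_left: np.ndarray,
                    hrir_right: np.ndarray, gain: float) -> np.ndarray:
    """Convolve a mono source with direction-dependent HRIRs (step B-a-iiii).

    mono       -- source signal, shape (n,)
    hrir_left  -- head-related impulse response for the left ear
    hrir_right -- HRIR for the right ear, both picked for the source azimuth
    gain       -- distance attenuation computed beforehand (e.g. linear_gain)
    """
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return gain * np.stack([left, right], axis=-1)   # shape (m, 2) stereo
```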
B-a-iiiii) Spatial rendering: According to the initial position, direction, and motion speed of the student, considering the Doppler effect, and referring to the position, direction, and motion change of the sound source, obtain the motion track of the sound source in the virtual learning environment; and according to the change of the student relative to the sound source, render the left and right sound channels with different volume strengths. For example, if the motion track of the sound source is from right to left and the distance keeps increasing, then in the motion process the strength of the right sound channel attenuates first, and the strength of the left sound channel is then gradually reduced, until the sound dies away.
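Step B-a-iiiii) thus combines three ingredients: a distance gain, a left/right channel weighting from the source azimuth, and a Doppler factor from the radial velocity. A compact sketch under the standard moving-source Doppler approximation (speed of sound about 343 m/s); the panning law and all names are illustrative choices, not prescribed by the patent.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees Celsius

def doppler_factor(radial_velocity: float) -> float:
    """Frequency scaling for a moving source and a static listener
    (positive radial_velocity = source receding)."""
    return SPEED_OF_SOUND / (SPEED_OF_SOUND + radial_velocity)

def pan_gains(azimuth_rad: float) -> tuple[float, float]:
    """Constant-power panning: azimuth 0 = straight ahead,
    positive = to the listener's right; valid range [-pi/2, pi/2]."""
    theta = (azimuth_rad + math.pi / 2) / 2   # map to [0, pi/2]
    return math.cos(theta), math.sin(theta)   # (left gain, right gain)

# Example matching the text: a source moving right to left while receding,
# so the right channel attenuates first and the left fades with distance.
left, right = pan_gains(math.radians(-30))    # source now left of center
shift = doppler_factor(+2.0)                  # receding at 2 m/s
```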
B-b) Synchronous audio and video updating: With reference to the head tracking technology, support synchronous updating of a video picture and sound during moving of the head of the student in the virtual learning environment, and implement fusion and presentation of the visual and auditory channels.
B-b-i) Head and ear synchronization: Track a position and posture of the head of the student in the virtual learning environment in real time according to a refreshing frequency of a VR picture, as shown in FIG. 10, redetermine the distance and direction of the sound source relative to the student, and implement synchronous rendering of a picture observed by the student and a sound heard by the student.
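Step B-b-i) amounts to re-evaluating each source's position in head coordinates once per rendered frame. Below is a sketch of that per-frame update, assuming the head pose (position plus a 3x3 rotation matrix) is supplied by the HMD tracking API; the data layout is hypothetical.

```python
import numpy as np

def to_head_frame(source_pos: np.ndarray, head_pos: np.ndarray,
                  head_rot: np.ndarray) -> np.ndarray:
    """Express a world-space source position in the head's frame;
    head_rot is a 3x3 rotation matrix from the tracker."""
    return head_rot.T @ (source_pos - head_pos)

def per_frame_update(sources: list[dict], head_pos: np.ndarray,
                     head_rot: np.ndarray) -> None:
    """Called at the VR picture's refresh rate: redetermine the distance
    and direction of every source relative to the student (B-b-i)."""
    for s in sources:
        rel = to_head_frame(s["pos"], head_pos, head_rot)
        s["distance"] = float(np.linalg.norm(rel))
        # Convention assumed here: x to the right, -z straight ahead.
        s["azimuth"] = float(np.arctan2(rel[0], -rel[2]))
```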
B-b-ii) Audiovisual fusion: Lay out to-be-presented content (FIG. 11 is a content layout diagram in a scene) in the virtual learning environment according to a teaching requirement, position an angle of view to corresponding content through head turning of the student, and render different volume in left and right ears according to a distance between the student and the sound source of the content and the directions of the student and the sound source.
B-b-iii) Interference cancellation of the multiple sound sources: For the multiple sound sources in the virtual learning environment, use a sound source attenuation function and simulate the sound reverberation range model established in step B-a-iii), thereby reducing the interference factors of the multiple sound sources.
C) Multi-channel interaction design: With respect to the requirement of multi-sensory cooperative interaction of the student in the virtual learning environment, screen, determine, decide, and fuse multi-sensory interactive behavior according to the corresponding parameters of interactive objects, in the order of interactive task, interactive behavior, interactive experience. FIG. 12 presents a diagram of task state transition in the virtual learning environment.
C-a) Interactive task design: Design an interactive task by using the history of plant growth as an example. The student participates in behavioral interaction in a process of sprouting, blooming, shape changing, and defoliating in sequence, and forms good interactive experience. In this way, a good mechanism is provided for visual, auditory, and tactile channel interaction.
C-a-i) Interactive task decomposition: During task design, design the number, attribute, objective, input action, and task result of a task according to its temporal and spatial attributes, and determine the function and specific process of the task. For example, in a bee pollination process, define number: 01; task attribute: spatial task; task objective: to complete pollination; task action (input): a bee searches for pollen; task result (output): contact with pollen; and, for the temporal task associated with the task of this number, the task result (output) after the same task action or operation is input: buzz.
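The numbered task record of step C-a-i) maps naturally onto a small data structure. Below, the bee pollination example from the text is encoded as an illustration; the field names and the number of the paired temporal task are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractiveTask:
    number: str
    attribute: str            # "spatial" or "temporal"
    objective: str
    action_input: str
    result_output: str
    linked_task: Optional[str] = None   # temporal task paired with a spatial one

# The bee pollination task of C-a-i), with a hypothetical paired task number:
pollination = InteractiveTask(
    number="01",
    attribute="spatial",
    objective="complete pollination",
    action_input="a bee searches for pollen",
    result_output="contact with pollen",
    linked_task="02",
)
buzz = InteractiveTask("02", "temporal", "audible feedback",
                       "same input action as task 01", "buzz sound effect")
```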
C-a-ii) Spatial task design: In the process of designing a spatial interactive task, coherence of visual feedback should always be ensured, and the spatial task should also be executed preferentially during execution. In the task of searching for pollen, the flying actions of the bee in the bee model should be coherent and natural without jumps; the flying actions of the bee are also presented first in the execution process, and then the sound effect of the temporal task is played.
C-a-iii) Temporal task design: Because this type of task has a long time unit, focus on the design of auditory channel information, the content of which includes background music, feedback sound effects, and the like, and mainly consider sound information content and accuracy in the output step. The buzz of the bee should be real and accurate when it is output.
C-b) Task decision: After multi-channel information is input, first determine the cooperative relationship therebetween and complete fusion of the input multi-channel information; then determine the weight and reliability of each piece of output information, accurately convey feedback information to the visual, auditory, and tactile sensory organs of the student, and complete multi-channel fusion.
C-b-i) Input information synthesis: According to input information of channels such as visual, auditory, and tactile channels, determine a cooperative relationship between interactive actions during task execution, and complete (concurrently or sequentially execute) synthesis of input information of each channel.
C-b-ii) Multi-channel integration: Decide a weight of the input information (such as gaze interaction, gesture input, and language recognition) of each channel to ensure that the output information is accurately conveyed to the student in the virtual learning environment, which forms a condition for multi-channel integration.
C-b-iii) Multi-channel fusion: By properly allocating the output information of each channel, complete cooperative feedback of each channel in time. Because a motion offset of visual imaging occurs while a sound is being heard, increase the weight of the visual information of the bee according to multi-channel fusion theory, and let the student weaken the auditory impression by highlighting visual dominance; in the sound feedback, design the buzz of the bee to go from weak to strong, so that the sound feedback is reliable in time. Multi-channel fusion that comprehensively considers task time and space enables the student to obtain good interactive experience.
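The fusion rule of step C-b-iii), raising the visual weight while the buzz ramps from weak to strong, can be sketched as a weighted output allocation. All weights and the ramp shape below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def allocate_output(weights: dict[str, float]) -> dict[str, float]:
    """Normalize per-channel weights so output feedback sums to one."""
    total = sum(weights.values())
    return {channel: w / total for channel, w in weights.items()}

# Visual dominance for the bee task: hypothetical channel weights.
weights = allocate_output({"visual": 0.6, "auditory": 0.3, "tactile": 0.1})

def buzz_envelope(n_samples: int) -> np.ndarray:
    """Buzz designed 'from weak to strong': a simple linear fade-in."""
    return np.linspace(0.0, 1.0, n_samples)

# Auditory gain over one second at 48 kHz, scaled by the channel weight.
buzz_gain = weights["auditory"] * buzz_envelope(48_000)
```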
What is not described in detail in this specification pertains to the prior art well known to a person skilled in the art.
The foregoing descriptions are merely exemplary embodiments of the present invention, but are not intended to limit the present invention. Any modification, equivalent replacement, and improvement made without departing from the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (4)

CONCLUSIESCONCLUSIONS 1. Een werkwijze voor meerkanaals fusie en vervaardiging van een virtuele leeromgeving, welke leeromgeving is gericht op praktijkonderwijs, waarbij de werkwijze de stappen omvat van het; A) het content genereren: voltooien van VR panoramische content acquisitie in een oefengebied door gebruik te maken van luchtfotografie en aardse acquisitie, het opzetten van een organisatiemodus van kenniselementen in verschillende lagen en gebieden in een virtuele leeromgeving, en het voltooien van de optimalisatie van een scène-naar scène springeffect; B) het fuseren van visuele en auditieve kanalen: het vertegenwoordigen van verzwakking van een leerobject en een achtergrondgeluidsbron in de virtuele leeromgeving door gebruik te maken van een lineaire volume-afstand verzwakkingsmethode, en het implementeren van een ruimtelijke weergavemodus van geluiden van verschillende objecten in een VR-scène; en met verwijzing naar een technologie voor het traceren van een hoofd, het implementeren van synchroon bijwerken van een panoramische video en geluid tijdens het bewegen van een hoofd van een student; en C) meerkanaals interactieontwerp: met betrekking tot een vereiste van meervoudig sensorische coöperatieve interactie van de student in de virtuele leeromgeving, screenen, bepalen, beslissen, en fuseren van meervoudig sensorisch interactief gedrag volgens corresponderende parameters van interactieve objecten in een volgorde van interactieve taak — interactief gedrag — interactieve ervaring.A method for multi-channel fusion and creation of a virtual learning environment, which learning environment is oriented towards practical education, the method comprising the steps of; A) generating content: completing VR panoramic content acquisition in a practice area by using aerial photography and terrestrial acquisition, setting up an organization mode of knowledge elements in different layers and areas in a virtual learning environment, and completing the optimization of a scene-to-scene jump effect; B) fusing visual and auditory channels: representing attenuation of a learning object and a background sound source in the virtual learning environment by using a linear volume-spacing attenuation method, and implementing a spatial rendering mode of sounds from different objects in a VR scene; and with reference to a head tracking technology, implementing synchronous updating of a panoramic video and sound while moving a head of a student; and C) multi-channel interaction design: regarding a requirement of multiple sensory cooperative interaction of the student in the virtual learning environment, screening, determining, deciding, and fusing multiple sensory interactive behavior according to corresponding parameters of interactive objects in an order of interactive task — interactive behavior — interactive experience. 2. 
De werkwijze voor meerkanaals fusie en vervaardiging van een virtuele leeromgeving, welke leeromgeving is gericht op praktijkonderwijs volgens conclusie 1, waarbij de content generatie in stap A) specifiek de stappen omvat van: A-a) het verwerven van data: voor het realistisch reproduceren van een praktijkonderwijs in het veld, het verwerven van onderwijsinformatie in een veldpraktijkgebied uit twee lagen — aardse observatiepunten en luchtfotografiegebieden, en het voltooien van de digitalisering in een panoramische VR-videomodus; A-a-i) verwerven van aardse observatiepuntinformatie: voor aardse observatiepraktijkinhoud, gebruikmakend van een hoog-definitie bewegingscameragroep om dynamische afbeeldingen van alle hoeken vast te leggen, echte-informatie-acquisitie met hoge dichtheid en meerdere hoeken te implementeren en volledige materiéle informatie te verkrijgen van een veldoefening-scène;The method for multi-channel fusion and creation of a virtual learning environment, which learning environment is oriented to practical education according to claim 1, wherein the content generation in step A) specifically comprises the steps of: A-a) acquiring data: for realistically reproducing a field practice education, acquiring educational information in a two-layer field practice area — terrestrial observation points and aerial photography areas, and completing the digitization in a VR panoramic video mode; A-a-i) Acquiring terrestrial observation point information: for terrestrial observation practice content, using a high-definition motion camera group to capture dynamic images from all angles, implement high-density multi-angle real-information acquisition, and obtain complete material information from a field exercise -scene; A-a-ii) verwerven van luchtfoto-informatie door gebruik te maken van een onbemand luchtvoertuig: voor het observeren van een luchtaanzicht en verticale verdeling van biotopen in een oefengebied op macroschaal, het maken van foto's van biotopen van luchtfotogebieden in verschillende ecotopen door gebruik te maken van het onbemande luchtvoertuig, om materiële informatie te verkrijgen van een volledig gezichtsveld;A-a-ii) Acquiring aerial information by using an unmanned aerial vehicle: to observe an aerial view and vertical distribution of biotopes in a macro-scale exercise area, taking pictures of biotopes of aerial image areas in different ecotopes by using from the unmanned aerial vehicle, to obtain material information from a full field of view; A-a-iii) daartussen in kaart brengen, waarbij een acquisitiepunt van luchtfotografie door het onbemande luchtvoertuig moet overeenkomen met de inhoud van een aards observatiepunt, dat wil zeggen, wanneer een panoramische luchtfoto-inhoud in één gebied is verkregen, informatiegegevens van een meervoud van aardse observatiepunten overeenkomstig moeten worden verworven;A-a-iii) mapping therebetween, where an aerial photography acquisition point by the unmanned aerial vehicle must correspond to the content of a terrestrial observation point, that is, when a panoramic aerial content is acquired in one area, information data of a plurality of terrestrial observation points must be acquired accordingly; A-b) gegevensorganisatie: tot stand brengen van een aggregatiemodus tussen kenniselementen in verschillende lagen en verschillende gebieden volgens een progressieve relatie en een verband tussen onderwijsinhoud; en het fuseren van vakkennis en een oefenroute volgens een praktijkroutine in het veld;A-b) data organization: establishing a 
mode of aggregation between knowledge elements in different layers and different areas according to a progressive relationship and a relationship between educational content; and fusing expertise and a practice route according to a field practice routine; A-b-i) acquisitiepuntannotatie: het gebruiken van een elektronische map als een geografisch basisgegevensplatform, gebruik maken van verschillende symbolen om VR-panoramische acquisitiepunten van aardse observaties en luchtfotografie door het onbemande luchtvoertuig weer te geven, en de VR panoramische acquisitiepunten te annoteren op de elektronische map volgens de ruimtelijke posities;A-b-i) acquisition point annotation: using an electronic map as a basic geographic data platform, using various symbols to represent VR panoramic acquisition points of terrestrial observations and aerial photography by the unmanned aerial vehicle, and annotating the VR panoramic acquisition points on the electronic map according to the spatial positions; A-b-ii) verticale associatie: het tot stand brengen van een relatie tussen een luchtfotoscène en een aards acquisitiepunt in de virtuele leeromgeving door gebruik te maken van een piramide-hiërarchisch structuurmodel, en het implementeren van een snelle omschakeling van een macro scene naar een micro-A-b-ii) vertical association: establishing a relationship between an aerial scene and a terrestrial acquisition point in the virtual learning environment by using a pyramid-hierarchical structure model, and implementing a quick switch from a macro scene to a micro - object;object; A-b-iil) horizontale associatie: in een zandbakmodel van een terrein en landvorm van een oefengebied, combineren van ecotoop-luchtfotopunten, aardse observatiepunten, en onderwerpkennispunten volgens een bewegende route van praktijkoefening, om verschillende onderzoek routes te vormen;A-b-iil) horizontal association: in a sandbox model of a terrain and landform of a practice area, combining ecotope aerial photo points, terrestrial observation points, and subject knowledge points according to a moving route of practice practice, to form different research routes; A-c) scène transitie: voor een onderlinge relatie tussen een stageplek en inhoud, het ontwerpen van een oplossing voor optimalisatie van een scène-naar- scène spring- en schakeleffect; A-c-i) sturend elementontwerp, waarbij een interactieve interface van de virtuele leeromgeving verandert van een tweedimensionaal vlak naar een driedimensionaal gebied; en media-navigatie-informatie zoals een tekst, symbool, en stem is ontworpen om de student naar een breder gezichtsveld te leiden; A-c-ii) scènewisseling: volgens de geografisch relatieve positie van twee scènes, het toevoegen van een indicatief icoon van een doelomschakelingspunt aan een vorige scène als invoer voor het springen naar een volgende scène; en A-c-iii) overgangsoptimalisatie: met betrekking tot een groot verschil in beeldkleur, helderheid, of inhoud tijdens het wisselen van scène, gebruik maken van vergelijkbare fusie, gradiëntfusie, en markeringsmodi om een fenomeen van een visuele mutatie op te lossen.A-c) scene transition: for inter-relationship between an internship site and content, design a solution for optimization of scene-to-scene jumping and switching effect; A-c-i) controlling element design, where an interactive interface of the virtual learning environment changes from a two-dimensional plane to a three-dimensional area; and media navigation information such as a text, symbol, and voice is designed to 
direct the student to a wider field of view; A-c-ii) scene switching: according to the geographically relative position of two scenes, adding an indicative icon of a target switching point to a previous scene as input to jump to the next scene; and A-c-iii) Transition Optimization: With respect to a large difference in image color, brightness, or content during scene switching, using similar fusion, gradient fusion, and highlight modes to resolve a visual mutation phenomenon. 3. De werkwijze voor meerkanaals fusie en vervaardiging van een virtuele leeromgeving, welke leeromgeving is gericht op praktijkonderwijs volgens conclusie 1, waarbij de fusie van visuele en auditieve kanalen in stap B) specifiek de stappen omvat van het: B-a) het ruimtelijke weergeven van een audiovisuele combinatie: het representeren van een verzwakking van een object en een andere achtergrondsgeluidsbron in de virtuele leeromgeving door gebruik te maken van de lineaire volume-afstand verzwakkingsmethode in combinatie met een binaurale positionering audiotechnologie gebaseerd op een Doppler effectmodel, en het implementeren van een ruimtelijke weergavemodus toepasbaar op geluiden van verschillende objecten en verschillende achtergrondgeluidseffecten in de VR- scène; B-a-i) het simuleren van meerdere geluidsbronnen: het simuleren van statische en dynamische puntgeluidsbronnen van corresponderende objecten in de virtuele leeromgeving volgens dynamisch veranderende parameters van positie, richting, verzwakking, en Doppler effect, en een achtergrondgeluidseffect zonder positie- en snelheidsparameters; B-a-ii) het mixen van de meerdere geluidsbronnen: om vocale scènes van objecten in een echte veldomgeving te simuleren, door spectra van geluiden van verschillende objecten onderling te versmelten en een multi-track mix te genereren;The method for multi-channel fusion and creation of a virtual learning environment, which learning environment is aimed at practical training according to claim 1, wherein the fusion of visual and auditory channels in step B) specifically comprises the steps of: B-a) spatially representing a audiovisual combination: representing an attenuation of an object and another background sound source in the virtual learning environment by using the linear volume-distance attenuation method in combination with a binaural positioning audio technology based on a Doppler effect model, and implementing a spatial rendering mode applicable to sounds of different objects and different background sound effects in the VR scene; B-a-i) simulating multiple sound sources: simulating static and dynamic point sound sources of corresponding objects in the virtual learning environment according to dynamically changing parameters of position, direction, attenuation, and Doppler effect, and a background sound effect without position and velocity parameters; B-a-ii) mixing the multiple sound sources: to simulate vocal scenes of objects in a real field environment, by fusing spectra of sounds from different objects together and generating a multi-track mix; B-a-iii) het weergeven van geluidsverzwakkingseffecten: gebruikmakend van een combinatie van een logaritmische verzwakkingsmodus en een lineaire verzwakkingsmodus om een impact van afstands- en richtingsfactoren in de werkelijke veldomgeving op een geluiddempend effect te reproduceren, dat wil zeggen, gebruikmakend van de logaritmische verzwakkingsmodus voor een richtingspunt geluidsbron, en het gebruik van lineaire verzwakkingsmodus voor de achtergrondgeluidsbron; 
B-a-iiii) het binauraal positioneren: gebaseerd op de beweging, richting, positie, en structuurattributen van de geluidsbron die worden gereflecteerd door de luidheid van het geluid en spectrumkenmerken, het bepalen van een positie van een geluidsbron in de virtuele leeromgeving ten opzichte van een positie van de student volgens een geluidsvoortplanting beginsel; B-a-iiiif) het ruimtelijk weergeven: rekening houdend met een Doppler effect, weergeven van linker en rechter geluidskanalen met verschillende sterktes afhankelijk van de positie van de student, en een richting, een afstand, en een bewegingsverandering van de geluidsbron in de virtuele leeromgeving; B-b) het synchroon audio en video actualiseren: met verwijzing naar de hoofd- traceer technologie, ondersteunen van synchrone actualisering van een videobeeld en -geluid tijdens het verplaatsen van het hoofd van de student in de virtuele leeromgeving, en implementeren van fusie en vervaardiging van de visuele en auditieve kanalen; B-b-i) het hoofd en oor synchroniseren: het realtime traceren van een positie en houding van het hoofd van de student in de virtuele leeromgeving op basis van een ververs-frequentie van een VR-beeld, het herbepalen van de afstand en richting van de geluidsbron ten opzichte van de student, en implementeren van synchrone weergave van een beeld dat door de student wordt waargenomen en een geluid dat wordt gehoord door de student; B-b-ii) het audiovisueel fuseren: het presenteren van een inhoudsscène in de virtuele leeromgeving volgens een onderwijsvereiste, positionering van een kijkhoek ten opzichte van overeenkomstige inhoud door het hoofd van de student te draaien, en het weergeven van het volume van verschillende geluidsbronnen op basis van een afstand tussen de student en het geluidsbron van de inhoud; en B-b-iii) het onderdrukken van interferentie van de meerdere geluidsbronnen: voor de meerdere geluidsbronnen in de virtuele leeromgeving, gebruikmakend van een geluidsbronverzwakkingsfunctie, en simulatie van een geluidsweerkaatsingsbereik,B-a-iii) Reproducing sound attenuation effects: using a combination of a logarithmic attenuation mode and a linear attenuation mode to reproduce an impact of distance and direction factors in the real field environment on a sound attenuation effect, that is, using the logarithmic attenuation mode for a direction point sound source, and using linear attenuation mode for the background sound source; B-a-iiii) binaural positioning: based on the motion, direction, position, and structural attributes of the sound source reflected by the loudness of the sound and spectrum characteristics, determining a position of a sound source in the virtual learning environment relative to a position of the student according to a sound propagation principle; B-a-iiiif) spatial rendering: taking into account a Doppler effect, rendering left and right sound channels with different strengths depending on the position of the student, and a direction, a distance, and a movement change of the sound source in the virtual learning environment; B-b) synchronously updating audio and video: with reference to the head tracking technology, supporting synchronous updating of a video image and sound while moving the student's head in the virtual learning environment, and implementing fusion and fabrication of the visual and auditory channels; B-b-i) synchronizing the head and ear: tracing in real time a position and posture of the student's head in the virtual learning environment based on a refresh rate 
4. The method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching according to claim 1, wherein the multi-channel interaction design in step C) specifically comprises:
C-a) interactive task design: achieving orderly participation in interactive behavior and forming a good interactive experience, thereby providing a sound mechanism for multi-channel interaction;
C-a-i) interactive task decomposition: during task design, splitting a task into a temporal task and a spatial task according to the temporal and spatial characteristics of the task, and designing an interactive mode, an objective, an action, a function, and a specific process of the task according to the characteristics of the task;
C-a-ii) spatial task design: when designing a spatial interactive task, ensuring coherence of visual feedback and keeping the spatial task consistent throughout its execution;
C-a-iii) temporal task design: focusing on the design of auditory channel information, the content of which includes background music and a feedback sound effect, and mainly considering the sound information content and accuracy in the output step;
C-b) task decision: after multi-channel information is input, first determining a cooperative relationship among the channels and completing the fusion of the input multi-channel information, and then determining a weighting and reliability of each piece of output information, accurately conveying feedback information to the sensory organs of the student, and completing the multi-channel fusion;
C-b-i) input information synthesis: according to the input information of the visual, auditory, and tactile channels, determining a cooperative relationship between interactive actions during task execution, and completing the synthesis of the input information of each channel;
C-b-ii) multi-channel integration: determining a weighting of the input information of each channel to ensure that the output information is accurately conveyed to the student in the virtual learning environment, which is a prerequisite for multi-channel integration; and
C-b-iii) multi-channel fusion: by correctly allocating the output information of each channel, accurately conveying the feedback information to the sensory organs of the student and completing the multi-channel fusion, so that the student obtains a good interactive experience.
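The task-decision pipeline in claim 4 (synthesize per-channel input, weight each channel, fuse the output) reduces to a reliability-weighted combination of channel signals. The following is a minimal sketch under the assumption that each channel reports a confidence score; the ChannelInput type and the sample numbers are hypothetical and only illustrate the weighting idea.

from dataclasses import dataclass

@dataclass
class ChannelInput:
    channel: str        # "visual", "auditory", or "tactile"
    value: float        # normalized interaction signal read from that channel
    reliability: float  # confidence in the reading, 0..1

def fuse_channels(inputs):
    # Weight each channel by its reliability (C-b-ii) and combine the
    # readings into a single fused decision value (C-b-iii).
    total = sum(i.reliability for i in inputs)
    if total == 0.0:
        return 0.0
    return sum(i.value * i.reliability / total for i in inputs)

# Example: gaze strongly suggests a selection; sound and touch weakly agree.
readings = [
    ChannelInput("visual", value=0.9, reliability=0.8),
    ChannelInput("auditory", value=0.4, reliability=0.3),
    ChannelInput("tactile", value=0.7, reliability=0.5),
]
print(fuse_channels(readings))  # one fused score driving the task decision

Normalizing the weights by their sum keeps the fused value in the same range as the per-channel readings, which is one simple way to keep the output information accurately conveyed to the student, as the claim requires.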
NL2026359A 2019-12-18 2020-08-27 Method for multi-channel fusion and presentation of virtual learning environment oriented to field practice teaching NL2026359B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911312490.8A CN111009158B (en) 2019-12-18 2019-12-18 Virtual learning environment multi-channel fusion display method for field practice teaching

Publications (2)

Publication Number Publication Date
NL2026359A NL2026359A (en) 2021-08-17
NL2026359B1 true NL2026359B1 (en) 2022-03-18

Family

ID=70116732

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2026359A NL2026359B1 (en) 2019-12-18 2020-08-27 Method for multi-channel fusion and presentation of virtual learning environment oriented to field practice teaching

Country Status (2)

Country Link
CN (1) CN111009158B (en)
NL (1) NL2026359B1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111714889B (en) * 2020-06-19 2024-06-25 网易(杭州)网络有限公司 Sound source control method, device, computer equipment and medium
CN111857370B (en) * 2020-07-27 2022-03-15 吉林大学 Multichannel interactive equipment research and development platform
CN112783320A (en) * 2020-10-21 2021-05-11 中山大学 Immersive virtual reality case teaching display method and system
CN112509151B (en) * 2020-12-11 2021-08-24 华中师范大学 Method for generating sense of reality of virtual object in teaching scene
CN113096252B (en) * 2021-03-05 2021-11-02 华中师范大学 Multi-movement mechanism fusion method in hybrid enhanced teaching scene
CN113408798B (en) * 2021-06-14 2022-03-29 华中师范大学 Barrier-free VR teaching resource color optimization method for people with abnormal color vision
CN114582185A (en) * 2022-03-14 2022-06-03 广州容溢教育科技有限公司 Intelligent teaching system based on VR technique

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005024756A1 (en) * 2003-09-07 2005-03-17 Yiyu Cai Molecular studio for virtual protein lab
CN102637073B (en) * 2012-02-22 2014-12-24 中国科学院微电子研究所 Method for realizing man-machine interaction on three-dimensional animation engine lower layer
CN102945564A (en) * 2012-10-16 2013-02-27 上海大学 True 3D modeling system and method based on video perspective type augmented reality
CN104599243B (en) * 2014-12-11 2017-05-31 北京航空航天大学 A kind of virtual reality fusion method of multiple video strems and three-dimensional scenic
CN106157359B (en) * 2015-04-23 2020-03-10 中国科学院宁波材料技术与工程研究所 Design method of virtual scene experience system
US10586469B2 (en) * 2015-06-08 2020-03-10 STRIVR Labs, Inc. Training using virtual reality
CN106484123A (en) * 2016-11-11 2017-03-08 上海远鉴信息科技有限公司 User's transfer approach and system in virtual reality
CN107817895B (en) * 2017-09-26 2021-01-05 微幻科技(北京)有限公司 Scene switching method and device
CN110427103B (en) * 2019-07-10 2022-04-26 佛山科学技术学院 Virtual-real fusion simulation experiment multi-channel interaction method and system

Also Published As

Publication number Publication date
CN111009158B (en) 2020-09-15
CN111009158A (en) 2020-04-14
NL2026359A (en) 2021-08-17

Similar Documents

Publication Publication Date Title
NL2026359B1 (en) Method for multi-channel fusion and presentation of virtual learning environment oriented to field practice teaching
WO2023045144A1 (en) Method for operating comprehensive stereoscopic teaching field system
CN103035136A (en) Comprehensive electrified education system for teaching of tourism major
CN103258338A (en) Method and system for driving simulated virtual environments with real data
CN112783320A (en) Immersive virtual reality case teaching display method and system
Siang et al. Interactive holographic application using augmented reality EduCard and 3D holographic pyramid for interactive and immersive learning
Innocente et al. A framework study on the use of immersive XR technologies in the cultural heritage domain
Bazzaza et al. Impact of smart immersive mobile learning in language literacy education
Zhang et al. Introducing massive open metaverse course and its enabling technology
Aurelia et al. A survey on mobile augmented reality based interactive storytelling
Walshe et al. Developing trainee teacher understanding of pedagogy and practice using 360-degree video and an interactive digital overlay.
Virmani et al. Mobile application development for VR in education
Moural et al. User experience in mobile virtual reality: An on-site experience
JP2018049305A (en) Communication method, computer program and device
JP2021086146A (en) Content control system, content control method, and content control program
Baldwin et al. A technical account behind the development of a reproducible low-cost immersive space to conduct applied user testing
Warvik Visualizing climate change in Virtual Reality to provoke behavior change
Nykänen et al. Rendering Environmental Noise Planning Models in Virtual Reality
徐学思 Research on personal tele-Immersion system in 5G background
AU2021103720A4 (en) Virtual reality learning and amusement system based on artificial intelligence (ai) and iot
Purwanto et al. Animal metamorphosis learning media using android Based augmented reality technology
Majernik 3D Virtual Projection and Utilization of Its Outputs in Education of Human Anatomy
Zhou Virtual Reality and Its Application in Environmental Education
Kuna et al. Swot analysis of virtual reality systems in relation to their use in secondary vocational training
JP6733027B1 (en) Content control system, content control method, and content control program