CN111009158A - Virtual learning environment multi-channel fusion display method for field practice teaching - Google Patents

Virtual learning environment multi-channel fusion display method for field practice teaching

Info

Publication number
CN111009158A
CN111009158A
Authority
CN
China
Prior art keywords
sound
channel
learning environment
information
learner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911312490.8A
Other languages
Chinese (zh)
Other versions
CN111009158B (en)
Inventor
杨宗凯
钟正
吴砥
吴珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central China Normal University
Original Assignee
Central China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central China Normal University filed Critical Central China Normal University
Priority to CN201911312490.8A priority Critical patent/CN111009158B/en
Publication of CN111009158A publication Critical patent/CN111009158A/en
Priority to NL2026359A priority patent/NL2026359B1/en
Application granted granted Critical
Publication of CN111009158B publication Critical patent/CN111009158B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Abstract

The invention belongs to the field of teaching applications of virtual reality (VR) technology, and provides a multi-channel fusion display method for a virtual learning environment oriented to field practice teaching. According to the characteristics of teaching content in a VR learning environment, a generation pipeline covering data acquisition, knowledge organization and scene switching is established; synchronous updating of the visual and auditory channels is realized through spatial rendering; and the input and output priorities of each interaction channel are evaluated to complete the learner's multi-sensory cooperative interaction in the VR learning environment. By adding auditory cues and a multi-channel discrimination mode for user interaction, the invention realizes fused display of the VR learning environment, improves the realism of the learning environment, and enhances participants' sense of immersion.

Description

Virtual learning environment multi-channel fusion display method for field practice teaching
Technical Field
The invention belongs to the field of teaching applications of Virtual Reality (VR) technology, and particularly relates to a multi-channel fusion display method for a virtual learning environment oriented to field practice teaching.
Background
Field practice is an important practical link in the cultivation of professional talents in disciplines such as geography, geology and biology, and an important educational activity for training students to connect theory with practice, master the basic knowledge and skills of their discipline, and improve their comprehensive quality and capability for practical innovation. However, current field practice faces many problems: teachers with a solid foundation in taxonomy and rich field experience are scarce; practice groups are large and time is short, so one-to-one guidance is difficult; the content and mode of practice are monotonous, remaining largely at the level of species identification and specimen collection and preparation, with poor sharing and interactivity of results; specimen collection conflicts with environmental protection; changes of season, weather and habitat leave much planned field work unfinished; and field work carries safety risks such as flash floods, landslides, debris flows, insect and snake bites, falls and heatstroke. All of these reduce the effectiveness of students' field practice. Constructing a field practice environment with VR technology breaks the limits of time and space: students can carry out field practice immersively without leaving the classroom, and can repeat the study as often as needed until the desired effect is reached. VR thus becomes a beneficial supplement to field practice teaching, effectively resolving many of its difficulties and greatly raising learners' interest. With the rapid rollout of commercial 5G, the performance bottlenecks of VR content, such as ultra-high resolution, full viewing angle and low latency, can be largely removed. A virtual learning environment for field practice teaching therefore has broad application prospects.
Although a realistic field practice VR learning environment can now be constructed quickly in panoramic mode, it does not yet fully meet the actual needs of field practice teaching. Taking biological field practice as an example, VR panoramic video can capture images of communication, social and reproductive behaviors among animals, but it is difficult to convey to learners the biological mechanisms implied by those behaviors, such as animals' sound production mechanisms, the characteristics of their acoustic signals, and their reception, processing and recognition of sound waves, none of which is easily expressed in images. Through synchronized sound processing across multiple camera positions, learners can perceive sound differences in all directions, and panoramic sound that simulates changing sound sources can present a natural and vivid effect.
Disclosure of Invention
Aiming at the defects or improvement requirements of the prior art, the invention provides a multi-channel fusion display method for a virtual learning environment oriented to field practice teaching, offering a scheme of content generation, audio-visual fusion and cooperative interaction built around the needs of virtual simulation teaching for field practice. According to the characteristics of teaching content in a VR learning environment, a generation pipeline covering data acquisition, knowledge organization and scene switching is established; synchronous updating of the visual and auditory channels is realized through spatial rendering; and the input and output priorities of each interaction channel are evaluated to complete the learner's multi-sensory cooperative interaction in the VR learning environment.
The object of the invention is achieved by the following technical measures.
A virtual learning environment multi-channel fusion display method for field practice teaching comprises three steps of content generation, audio-visual channel fusion and multi-channel interactive design.
(1) Content generation: the method meets the requirement of field practice teaching, adopts a mode of combining aerial photography and ground acquisition to finish VR panoramic content acquisition in a practice area, constructs an organization mode of knowledge elements of different levels and areas in a VR learning environment, and finishes optimization of the jumping effect between scenes.
And (1-1) data acquisition. In order to truly restore the field practice teaching process, the teaching information of the field practice area is collected from two layers of ground observation points and aerial photography areas, and digitization is completed in a VR panoramic video mode.
And (1-1-1) collecting information of ground observation points. For the practice content observed on the ground, a set of high-definition motion cameras captures full-angle dynamic images, realizing high-density, multi-angle, on-site real-scene information acquisition and comprehensively obtaining the material information of the field practice scene.
(1-1-2) unmanned aerial vehicle aerial photography information acquisition. For practice content such as macroscopic, large-scale observation of the bird's-eye view and the vertical distribution of habitats in the field practice area, the unmanned aerial vehicle shoots the habitat of the aerial photography area over different ecological areas, obtaining material information of the full field of view.
(1-1-3) mapping. The unmanned aerial vehicle aerial photography collection points need to correspond to the content of the ground observation points; that is, for each area in which panoramic aerial content is collected once, the information data of several ground observation points must be collected accordingly.
And (1-2) organizing data. Constructing an aggregation mode among knowledge elements of different levels and different areas according to the progressive relation and the relevance of the teaching content; according to the flow line of field practice, the subject knowledge content and the practice route are organically fused.
(1-2-1) marking collection points. An electronic map is used as the basic geographic data platform; different symbols represent the VR panoramic acquisition points for ground observation and for unmanned aerial vehicle aerial photography, and the acquisition points are marked on the electronic map according to their spatial positions;
(1-2-2) longitudinal association. A pyramid hierarchical structure model establishes the association from an aerial photography scene down to its ground acquisition points in the VR learning environment, realizing quick switching from a macroscopic scene to a microscopic object;
(1-2-3) transverse association. On a topographic and geomorphic sand table model of the practice area, aerial photography points, ground observation points and subject knowledge points in an ecological area are combined according to the moving route of the field practice to form different investigation routes.
And (1-3) scene transition. In the field practice teaching, certain relevance exists among learning contents, and an optimization scheme of skipping and conversion effects among scenes is designed according to the mutual relation between a practice field and the contents in order to reduce the phenomenon of dizziness of learners in the VR scene switching process.
(1-3-1) guide element prompting. The interactive interface of the VR learning environment changes from a two-dimensional plane into a three-dimensional sphere, exceeding the limits of a traditional display screen. Media navigation information such as text, symbols and voice is designed to guide the learner across this wider field of view and to direct attention to the important learning content.
(1-3-2) scene transition. According to the relative positions of the two scenes in geography, an indication icon of a target transition point is added in the former scene to serve as a jump inlet of the latter scene, and the style of the icon can be designed according to the scene background.
(1-3-3) transition optimization. When the scenes being switched differ greatly in picture color, brightness or content, similarity-based, gradual-fusion and highlight display modes are adopted to resolve the abrupt visual change.
(2) Audio-visual channel fusion: the attenuation of a learning object and a background sound source in a VR learning environment is expressed by adopting a linear volume-distance attenuation method, so that a space rendering mode of different object sounds in a VR scene is realized; and the synchronous updating of the panoramic video and the sound when the head of the learner moves is realized by combining a head tracking technology.
(2-1) Audio-visual combined spatial rendering. Based on a Doppler effect model, combined with a binaural localization audio technology, a linear volume-distance attenuation method is adopted to represent attenuation of objects and other background sound sources in a VR learning environment, and a space rendering mode suitable for different object sounds and background sound effects in a VR scene is realized.
(2-1-1) multiple sound source simulation. According to dynamic change parameters such as position, direction, attenuation and Doppler effect, static and dynamic point sound sources of corresponding objects and background sound effects without parameters such as position and speed are simulated in the VR learning environment.
(2-1-2) mixing multiple sound sources. In order to simulate the sounding scene of objects (such as animals and plants) in the wild real environment, the frequency spectrums of the sounds of different objects are fused with each other to generate multi-track mixed sound.
(2-1-3) sound attenuation effect representation. The influence of factors such as distance, direction and the like on the sound attenuation effect in the field real environment is restored by adopting an attenuation mode combining logarithm and linearity, for example, a logarithmic attenuation mode is used for a directional point sound source, and a linear distance attenuation mode is adopted for a background sound source.
And (2-1-4) binaural positioning. Based on attributes such as sound source motion, direction, position and structure reflected by characteristics such as loudness and frequency spectrum of sound, the position of the sound source relative to the learner in the VR learning environment is positioned according to the sound propagation principle.
And (2-1-5) space rendering. Taking the Doppler effect into account, the left and right ear channels are rendered with different intensities according to the learner's position in the VR learning environment and the direction, distance and motion changes of the sound source.
And (2-2) synchronously updating the audio and video. The head tracking technology is combined, synchronous updating of video pictures and sound when the head of a student moves in the VR learning environment is supported, and multi-channel fusion display of visual sense and auditory sense is achieved.
(2-2-1) head and ear synchronization. The position and posture of the learner's head are tracked in real time at the refresh frequency of the VR picture, the distance and direction of each sound source relative to the learner are re-determined, and the picture the learner sees and the sound the learner hears are rendered in step.
(2-2-2) fusion of audio and visual. Content scenes in the VR learning environment are displayed according to teaching requirements; the learner positions the visual angle on the corresponding content by turning the head, and the volumes of the different sound sources are rendered according to the learner's distance from each source at that moment.
And (2-2-3) eliminating multi-sound source interference. Aiming at the phenomenon that multiple sound sources exist in a VR learning environment, a sound source attenuation function is adopted to simulate the reverberation range of sound, so that the interference factors of the multiple sound sources are reduced.
(3) Multi-channel interaction design: aiming at the learner's need for multi-sensory cooperative interaction in the VR learning environment, multi-sensory interactive behaviors are screened, judged, decided and fused according to the sequence of interaction tasks, interaction behaviors and interaction experience and the corresponding parameters of the interaction object.
And (3-1) designing an interactive task. By reasonably designing the interaction tasks, the ordered participation of the interaction behaviors is achieved, and good interaction experience is formed, so that a good mechanism is provided for multi-channel interaction.
(3-1-1) interactive task decomposition. During task design, time and space attributes of tasks need to be decomposed, and task interaction modes, purposes, actions and specific flows are designed according to the attributes of the tasks.
And (3-1-2) space task design. Compared with traditional multimedia learning resources, visual enhancement is the advantage of a VR learning environment; consistency of visual feedback is ensured throughout the design of a space interaction task, and the space task is executed preferentially.
(3-1-3) time task design. The tasks have longer time units, so the design of auditory channel information is emphasized, the content comprises background music, feedback sound effect and the like, and the information content and the accuracy of sound in an output link are mainly considered.
And (3-2) task decision. After multi-channel interactive information is input, the cooperative relationship among the pieces of multi-channel interactive information is first judged to complete the fusion of multi-channel information input; then, according to weight and reliability judgments on each piece of output information, the feedback information is accurately transmitted to the learner's sense organs, completing the multi-channel fusion.
(3-2-1) Synthesis of input information. And judging the cooperation relation of each piece of information in the task execution according to the input information of channels such as visual, audio, tactile and the like, and finishing the synthesis of the information input of each channel.
(3-2-2) multichannel integration. And performing weight decision according to the input information of each channel to ensure that the output information is accurately transmitted to learners in the VR learning environment, so as to form a condition of multi-channel integration.
(3-2-3) multichannel fusion. Through reasonably allocating the output information of each channel, the feedback information is accurately transmitted to each sense organ of the learner, the fusion of multiple channels is completed, and the learner obtains good interactive experience.
The invention provides a multi-channel fusion display method for a virtual learning environment oriented to field practice teaching, offering a scheme of content generation, audio-visual fusion and cooperative interaction built around the needs of virtual simulation teaching for field practice. According to the characteristics of teaching content in a VR learning environment, a generation pipeline covering data acquisition, knowledge organization and scene switching is established; synchronous updating of the visual and auditory channels is realized through spatial rendering; and the input and output priorities of each interaction channel are evaluated to complete the learner's multi-sensory cooperative interaction in the VR learning environment. By adding auditory cues and a multi-channel discrimination mode for user interaction, the invention realizes fused display of the VR learning environment, improves the realism of the learning environment, and enhances participants' sense of immersion.
Drawings
FIG. 1 is a flow chart of a virtual learning environment multi-channel fusion display method for field practice teaching in the embodiment of the invention.
Fig. 2 is a schematic diagram of a corresponding relationship between an unmanned aerial vehicle aerial area and a ground observation point in the embodiment of the invention.
Fig. 3 is a schematic view of a scene hotspot design flow in the embodiment of the present invention.
Fig. 4 is a schematic view of a scene change design flow in the embodiment of the present invention.
Fig. 5 is a schematic diagram illustrating an audio and video synchronization process in a VR learning environment according to an embodiment of the present invention.
FIG. 6 is a flow chart illustrating a stereo audio processing according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a distance attenuation pattern of sound source volumes in an embodiment of the present invention.
Fig. 8 is a schematic diagram of dividing the reverberation effect of the sound field of the point sound source in the embodiment of the present invention.
Fig. 9 is a schematic diagram of a binaural localization model in an embodiment of the invention.
FIG. 10 is a schematic diagram of the spatial relationship between a learner and a listener in an embodiment of the present invention.
Fig. 11 is a schematic diagram of learning content layout in the embodiment of the present invention.
FIG. 12 is a task state transition diagram in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention provides a virtual learning environment multichannel fusion display method for field practice teaching, including the following steps:
(1) Content generation.
The content generation relates to the construction of field practice teaching knowledge in a VR learning environment, VR panoramic content collection in a practice area is completed in a mode of combining aerial photography and ground collection, the organization of knowledge elements in different levels and areas in the VR learning environment is constructed, and the skipping effect among scenes is optimized.
The method specifically comprises the following steps:
and (1-1) data acquisition. In order to truly restore the field practice teaching process, as shown in fig. 2, VR panoramic videos are collected according to teaching requirements in different seasons of spring, summer, autumn and winter, teaching information of a field practice area is collected from two layers of a ground observation point and an aerial photography area, and digitization is completed in a VR panoramic video mode.
And (1-1-1) collecting information of ground observation points. Aiming at the practical contents observed on the ground, a high-definition motion camera set is used for capturing full-angle dynamic images, high-density, multi-angle and on-site real-scene information acquisition is realized, and relevant panoramic teaching information of field practical scenes is comprehensively obtained.
(1-1-2) unmanned aerial vehicle aerial photography information acquisition. For practice content such as observing the bird's-eye view and the vertical distribution of habitats across the large-scale field practice area, the unmanned aerial vehicle hovers at fixed points at high altitude (120-500 meters) over different ecological areas and shoots the habitat of the aerial photography area in 360 degrees, obtaining material information covering the full field of view of the practice area.
(1-1-3) mapping. The unmanned aerial vehicle aerial photography collection point needs to be associated with the content of the ground observation point, namely a panoramic aerial photography collection area is associated with panoramic material information of a plurality of ground observation points collected in the area.
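This one-to-many correspondence can be illustrated with a minimal Python sketch; the class and field names below are hypothetical, chosen only to mirror the mapping described above.

```python
# A minimal sketch (class and field names are hypothetical) of the
# one-to-many correspondence in step (1-1-3): one aerial panorama area
# holds references to the ground observation points collected inside it.
from dataclasses import dataclass, field


@dataclass
class GroundPoint:
    point_id: str
    position: tuple               # (longitude, latitude)
    panorama_file: str            # VR panoramic video captured at the point


@dataclass
class AerialArea:
    area_id: str
    center: tuple                 # (longitude, latitude) of the hover point
    altitude_m: float             # hover altitude, e.g. within 120-500 m
    panorama_file: str
    ground_points: list = field(default_factory=list)

    def attach(self, point: GroundPoint) -> None:
        """Associate a ground observation point with this aerial area."""
        self.ground_points.append(point)


# One aerial collection corresponds to several ground observation points.
wetland = AerialArea("A01", (114.36, 30.52), 300.0, "wetland_aerial.mp4")
wetland.attach(GroundPoint("G01", (114.361, 30.521), "reed_bed.mp4"))
wetland.attach(GroundPoint("G02", (114.358, 30.519), "mudflat.mp4"))
```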
And (1-2) organizing data. Constructing an aggregation mode among knowledge elements of different levels and different areas according to the progressive relation and the relevance of the teaching content; according to the flow line of field practice, the subject knowledge content and the practice route are organically fused.
(1-2-1) marking collection points. Because many VR panorama points are collected, an electronic map can be selected as the basic geographic data platform; hotspot and helicopter symbols represent the VR panorama collection points for ground observation and for unmanned aerial vehicle aerial photography respectively, and each point is marked at its corresponding spatial position;
(1-2-2) longitudinal association. A pyramid hierarchical structure model represents the association between aerial photography and ground acquisition points in the VR learning environment, enabling rapid switching from a macroscopic scene to a microscopic object.
(1-2-3) transverse correlation. On a topographic and geomorphic sand table model in a practice area, contents such as aerial photography points, ground observation points, subject knowledge points and the like in an ecological area are associated according to the internal logic of relevant knowledge in field practice to form different investigation routes.
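The two association types can be sketched together; the dictionary layout and identifiers below are illustrative assumptions, not structures defined by the patent.

```python
# A sketch of the pyramid's longitudinal aerial-to-ground links of step
# (1-2-2) and a transverse investigation route of step (1-2-3).

# Longitudinal: macroscopic aerial scenes at the top of the pyramid link
# down to the microscopic ground acquisition points beneath them.
PYRAMID = {
    "aerial/A01": ["ground/G01", "ground/G02"],
    "aerial/A02": ["ground/G03"],
}

# Transverse: an investigation route ordered along the field-practice
# moving line, each stop bundling a collection point with knowledge points.
ROUTE_WETLAND = [
    {"stop": "aerial/A01", "knowledge": ["vertical zonation of habitats"]},
    {"stop": "ground/G01", "knowledge": ["reed community", "bird calls"]},
    {"stop": "ground/G02", "knowledge": ["mudflat benthos"]},
]


def descend(aerial_id: str) -> list:
    """Longitudinal switch: jump from a macroscopic aerial scene to the
    ground points (microscopic objects) associated with it."""
    return PYRAMID.get(aerial_id, [])


print(descend("aerial/A01"))      # ['ground/G01', 'ground/G02']
```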
And (1-3) scene transition. In the field practice teaching, a certain correlation exists between a learning field and contents; the relevance of different VR scenes is fully utilized, an optimization scheme of switching and transition effects among the scenes is designed, and the dizzy feeling of learners in the jumping process can be reduced.
(1-3-1) guide element prompting. The media navigation information such as characters, symbols, voice and the like can guide the learner to pay attention to the important learning content. Fig. 3 illustrates the design and addition process of hotspots (planar hotspots and transparent hotspots) in a VR learning environment.
(1-3-2) scene transition. In order to realize the jump between the scenes 1 and 2 in fig. 4, a transition switching point is firstly obtained in the scene 1, and then an indication icon of the scene 2 is added at the position to be used as a jump inlet of the scene 2, wherein the pattern of the icon is designed according to the scene background and indicates the direction and the name of the scene 2.
(1-3-3) transition optimization. As shown in fig. 4, different processing modes are adopted for different scene differences during switching: similar scenes use fused display, scenes with larger differences use gradual display, and a push display emphasizes the target scene, resolving the abrupt visual change.
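A possible selection rule is sketched below. The patent specifies the three display modes but no similarity metric, so the grayscale-histogram distance and the 0.3 threshold here are illustrative assumptions.

```python
# A sketch of transition-mode selection for step (1-3-3); the metric and
# threshold are assumptions, only the three mode names come from the text.
import numpy as np


def histogram(frame: np.ndarray, bins: int = 32) -> np.ndarray:
    counts, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return counts / max(counts.sum(), 1)


def scene_difference(a: np.ndarray, b: np.ndarray) -> float:
    """Total-variation distance between frame histograms, in [0, 1]."""
    return 0.5 * float(np.abs(histogram(a) - histogram(b)).sum())


def choose_transition(a: np.ndarray, b: np.ndarray,
                      emphasize_target: bool = False) -> str:
    if emphasize_target:
        return "push"                         # highlight the target scene
    return "fusion" if scene_difference(a, b) < 0.3 else "gradual"


rng = np.random.default_rng(0)
scene1 = rng.integers(0, 256, (90, 160))      # stand-in panorama frames
scene2 = np.clip(scene1 + 10, 0, 255)         # similar scene, slightly brighter
print(choose_transition(scene1, scene2))      # "fusion" for similar scenes
```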
(2) The audio-visual channels are merged.
Completing spatial rendering of visualized and audible content in the VR learning environment using the workflow shown in fig. 5; and the synchronous updating of the sound and the picture when the head of the learner moves is realized by combining the head tracking technology.
(2-1) Audio-visual combined spatial rendering. Based on a Doppler effect model, combined with a binaural localization audio technology, a linear volume-distance attenuation method is adopted to represent the attenuation of a learning object and a background sound source, and a space rendering mode suitable for different object sounds and background sound effects in a VR scene is realized.
(2-1-1) multiple sound source simulation. According to dynamic change parameters such as position, direction, attenuation and Doppler effect, static and dynamic point sound sources of corresponding objects and background sound effects without parameters such as position and speed are simulated in the VR learning environment.
(2-1-2) mixing multiple sound sources. In order to simulate the sound production of objects (such as animals and plants) in a real field environment, a three-dimensional audio processing mode shown in fig. 6 is adopted, sound data of the objects are acquired by actually acquiring sample sounds or downloading the sample sounds from an existing audio library, standard audio files are generated by audio editing processing software, and then the frequency spectrums of the sounds of different objects are fused with each other to generate the required multi-track VR audio.
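A minimal mixing sketch follows. The patent fuses sound spectra into a multi-track mix via audio editing software; as a simplification, this sketch sums sample-aligned mono tracks in the time domain and normalizes the peak, one common way to produce such a mix.

```python
# A time-domain simplification of the multi-source mix in step (2-1-2).
import numpy as np


def mix_tracks(tracks, peak=0.9):
    """Sum equal-length mono tracks and rescale to avoid clipping."""
    mixed = np.sum(np.stack(tracks), axis=0)
    m = np.max(np.abs(mixed))
    return mixed if m == 0 else mixed * (peak / m)


sr = 44100
t = np.linspace(0.0, 1.0, sr, endpoint=False)
birdsong = 0.4 * np.sin(2 * np.pi * 3000 * t)   # stand-in object sound
wind = 0.2 * np.sin(2 * np.pi * 120 * t)        # stand-in background sound
mixed = mix_tracks([birdsong, wind])            # one second of mixed audio
```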
(2-1-3) sound attenuation effect representation. The influence of distance on attenuation is considered in the virtual environment. Let the distance between the center of the learner's head and the sound source be R, the maximum audible distance be Rmax, the maximum volume of the sound source be Vmax, and the attenuated volume be V; the attenuation is then given by formula 1:

V = Vmax × (1 − R/Rmax), for 0 ≤ R ≤ Rmax; V = 0 for R > Rmax (formula 1)
Secondly, to compensate for the attenuation differences of different sound sources in the VR learning environment, minimum and maximum attenuation distances need to be set for them:
(a) the minimum attenuation distance corresponds to the maximum volume, is closer to the learner, and the volume is not changed any more;
(b) the maximum attenuation distance corresponds to the minimum volume above which the sound emitted by the source is not heard.
Combining formula 1 with the volume-distance attenuation pattern of the sound source (fig. 7 shows a schematic diagram of an attenuation pattern), the sound field of a point sound source is divided into different reverberation regions as shown in fig. 8. In practical applications the attenuation mode follows the type of source: a logarithmic mode is used for directional point sound sources and a linear distance mode for background sound sources, so that the closer the learner is to a sound source in the scene, the stronger the received reverberation effect.
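Both modes can be sketched as follows, assuming that formula 1 generalizes to a clamped linear law when the minimum attenuation distance of (a) is non-zero; the logarithmic mode uses the inverse-distance rolloff common in game audio, since the text gives no explicit formula for it.

```python
# A sketch of the attenuation modes in step (2-1-3).
def linear_volume(r, v_max, r_max, r_min=0.0):
    """Formula 1 with clamps: full volume inside r_min ((a) above),
    silence beyond r_max ((b) above), linear fall-off in between."""
    if r <= r_min:
        return v_max
    if r >= r_max:
        return 0.0
    return v_max * (r_max - r) / (r_max - r_min)  # = Vmax*(1 - R/Rmax) when r_min = 0


def log_volume(r, v_max, r_ref=1.0):
    """Assumed logarithmic mode for directional point sources: volume
    falls 6 dB per doubling of distance beyond the reference distance."""
    return v_max * min(1.0, r_ref / max(r, 1e-6))


print(linear_volume(25.0, v_max=1.0, r_max=50.0))   # 0.5 (background source)
print(log_volume(4.0, v_max=1.0))                   # 0.25 (directional source)
```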
And (2-1-4) binaural positioning. As shown in fig. 9, the horizontal, front-back and vertical directions of a sound in the VR environment are determined from the frequency, phase and amplitude of the sound source, completing its directional localization. Attributes such as motion parallax, loudness, initial time delay, Doppler effect and reverberation are then determined from the attributes that influence sound propagation, such as distance, position and terrain environment. Using a head-related transfer function, the distance, speed and direction of the sound source relative to the learner in the VR learning environment are calculated in real time from the source's direction and distance, and the source signal is processed by convolution to generate its stereo sound.
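A minimal binaural sketch follows. Real systems convolve the source with measured head-related impulse responses (HRIRs) selected by azimuth and elevation; the two-tap placeholder responses below only mimic the interaural time and level differences and are not real HRTF data.

```python
# A placeholder convolution sketch for the binaural step in (2-1-4).
import numpy as np


def binauralize(mono, hrir_left, hrir_right):
    """Convolve a mono source with per-ear HRIRs -> (n, 2) stereo signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)


# Placeholder HRIRs for a source on the listener's right: the left ear
# hears a delayed, attenuated copy (~0.7 ms ~= 30 samples at 44.1 kHz).
hrir_right_ear = np.array([1.0])
hrir_left_ear = np.concatenate([np.zeros(30), [0.6]])

mono = np.random.randn(4410)                  # stand-in source signal
stereo = binauralize(mono, hrir_left_ear, hrir_right_ear)
```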
And (2-1-5) space rendering. From the learner's initial position, direction and movement speed, and taking the Doppler effect into account together with the position, direction and motion changes of the sound source, the learner's movement track in the VR learning environment is obtained, and the left and right ear channels are rendered with different intensities according to the relative change between learner and source. For example, if the sound source moves from right to left while receding, the intensity of the right ear channel attenuates gradually during the motion, and the left ear channel weakens gradually until the sound disappears.
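One way to realize this rendering is sketched below. The constant-power panning law and all numeric defaults are assumptions (the text only states that the two ear channels receive different intensities); the Doppler factor is the standard moving-source formula.

```python
# A sketch of step (2-1-5): per-frame ear gains plus a Doppler pitch factor.
import math


def render_frame(listener_pos, source_pos, source_vel,
                 v_max=1.0, r_max=50.0, c=343.0):
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]          # +y = listener's forward
    r = math.hypot(dx, dy)
    # Linear distance attenuation (the background-source mode of 2-1-3).
    gain = 0.0 if r >= r_max else v_max * (1.0 - r / r_max)
    # Constant-power pan over the front hemisphere; behind-the-head cues
    # would need a full HRTF rather than simple panning.
    pan = max(-1.0, min(1.0, math.atan2(dx, dy) / (math.pi / 2)))
    left = gain * math.cos((pan + 1.0) * math.pi / 4.0)
    right = gain * math.sin((pan + 1.0) * math.pi / 4.0)
    # Doppler: positive radial speed means the source approaches the listener.
    v_radial = -(dx * source_vel[0] + dy * source_vel[1]) / max(r, 1e-6)
    pitch = c / (c - v_radial)                    # >1 approaching, <1 receding
    return left, right, pitch


# Source to the front-right, moving leftward and away: the right channel is
# louder now, and both channels fade as it recedes, as in the example above.
print(render_frame((0.0, 0.0), (5.0, 5.0), (-2.0, 3.0)))
```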
And (2-2) synchronously updating the audio and video. The head tracking technology is combined, synchronous updating of video pictures and sound when the head of a student moves in the VR learning environment is supported, and multi-channel fusion display of visual sense and auditory sense is achieved.
(2-2-1) head and ear synchronization. The position and posture of the learner's head are tracked in real time at the refresh frequency of the VR picture and, as shown in fig. 10, the distance and direction of each sound source relative to the learner are re-determined, so that the picture the learner observes and the sound the learner hears are rendered in step.
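The per-frame synchronization loop might look like the following sketch, which assumes a 90 Hz refresh and yaw-only head rotation for brevity; render_video and render_audio are hypothetical stand-ins for the picture and sound pipelines.

```python
# A sketch of the head-ear synchronization tick in step (2-2-1): each
# picture refresh re-expresses every sound source in head coordinates from
# the tracked pose, so image and sound are updated in the same tick.
import math


def to_head_frame(source_pos, head_pos, head_yaw):
    """World position -> (distance, azimuth relative to the head's facing)."""
    dx = source_pos[0] - head_pos[0]
    dy = source_pos[1] - head_pos[1]
    return math.hypot(dx, dy), math.atan2(dx, dy) - head_yaw


def frame_update(head_pose, sources, render_video, render_audio):
    """One tick at the VR refresh rate (e.g. 90 Hz)."""
    render_video(head_pose)                       # redraw the picture ...
    for s in sources:                             # ... and re-render each sound
        d, az = to_head_frame(s["pos"], head_pose["pos"], head_pose["yaw"])
        render_audio(s, distance=d, azimuth=az)   # same pose, same tick


frame_update({"pos": (0.0, 0.0), "yaw": 0.5},
             [{"pos": (3.0, 4.0), "name": "birdsong"}],
             render_video=lambda pose: None,
             render_audio=lambda s, distance, azimuth:
                 print(s["name"], round(distance, 2), round(azimuth, 2)))
```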
(2-2-2) fusion of audio and visual. According to teaching requirements, display contents (fig. 11 is a content layout diagram of a certain scene) are arranged in a VR learning environment, when a learner rotates the head, a visual angle can be positioned on the corresponding contents, and different volumes are rendered at left and right ears according to the distance and the direction of the learner from a sound source at the position of the contents.
And (2-2-3) eliminating multi-sound-source interference. For the phenomenon that multiple sound sources coexist in the VR learning environment, the sound source attenuation function is used with the reverberation range model constructed in step (2-1-3) to limit each sound's audible range, thereby reducing the interference among multiple sources.
(3) Multi-channel interaction design. Aiming at the learner's need for multi-sensory cooperative interaction in the VR learning environment, the multi-sensory cooperative interaction is screened, judged, decided and fused according to the sequence of interaction tasks, interaction behaviors and interaction experience and the corresponding parameters of the interaction object; fig. 12 shows the task state transition diagram in the VR learning environment.
And (3-1) designing an interactive task. Taking the growth history of a plant as an example, interactive tasks are designed so that learners participate in orderly behavioral interaction through the stages of sprouting, flowering, transformation and shedding, forming a good interactive experience and providing a sound mechanism for visual, auditory and tactile channel interaction.
(3-1-1) interactive task decomposition. During task design, the task's number, attributes, purpose, input action and result are separated out and designed according to its time and space attributes, determining the role and specific flow of the task. For example, the bee pollination process can be defined as: number: 01; task attribute: space task; task purpose: complete pollination; task action (input): the bee searches for pollen; task result (output): contact with pollen. After the same input action, the related time task in the serial task chain produces the result (output): a buzzing sound.
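This decomposition can be written down directly; the dataclass and its field names are hypothetical, chosen to mirror the number/attribute/purpose/action/result fields listed above.

```python
# A sketch of the task decomposition in step (3-1-1), encoding the bee
# pollination example from the text.
from dataclasses import dataclass


@dataclass
class InteractiveTask:
    number: str
    attribute: str         # "space" or "time" task
    purpose: str
    input_action: str
    output_result: str
    follows: str = ""      # serial ordering: number of the prerequisite task


pollination = InteractiveTask(
    number="01", attribute="space", purpose="complete pollination",
    input_action="bee searches for pollen", output_result="contact with pollen")

buzzing = InteractiveTask(
    number="02", attribute="time", purpose="auditory feedback",
    input_action="bee searches for pollen", output_result="buzzing sound",
    follows="01")          # the time task fires after the space task (3-1-2)
```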
And (3-1-2) space task design. Consistency of visual feedback is ensured throughout the design of a space interaction task, and the space task is executed preferentially: in the bee pollen-searching task the flight of the bee model is coherent and natural, without jumps; during execution the bee's flight is shown first, and the sound effect of the time task is played afterwards.
(3-1-3) time task design. The task has a longer time unit, so that the design of auditory channel information is emphasized, the content comprises background music, feedback sound effect and the like, the information content and the accuracy of sound in an output link are mainly considered, and the bee hum is really and accurately output.
And (3-2) task decision. After multi-channel interactive information is input, the cooperative relationship among the pieces of multi-channel interactive information is first judged to complete the fusion of multi-channel information input; then, according to weight and reliability judgments on each piece of output information, the feedback information is accurately transmitted to the learner's visual, auditory and tactile senses, completing the multi-channel fusion.
(3-2-1) Synthesis of input information. And judging the cooperation relation of each interaction action in the task execution according to the input information of channels such as visual, audio, tactile and the like, and completing the synthesis (concurrent or sequential execution) of the information input of each channel.
(3-2-2) multichannel integration. And judging according to the weight of input information (such as gaze interaction, gesture input and language recognition) of each channel to ensure that the output information is accurately transmitted to learners in the VR learning environment, so as to form a condition of multi-channel integration.
(3-2-3) multichannel fusion. By reasonably allocating the output information of each channel, cooperative feedback across channels is completed in time. Because auditory localization can drift in space relative to the visual image, the weight of the bee's visual information is strengthened according to multi-channel fusion theory, and visual dominance is used to override the learner's auditory impression of where the sound occurs; meanwhile, in sound feedback, the bee's buzzing is designed to grow from weak to strong so that it acquires reliability over time. Considering the multi-channel fusion of task time and space together gives the learner a good interactive experience.
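The weight decision and the weak-to-strong sound ramp can be sketched as follows; the channel weights and ramp length are illustrative numbers, not values from the patent.

```python
# A sketch of steps (3-2-1)-(3-2-3): weighted decision over channel inputs
# plus a weak-to-strong buzz ramp.

# (3-2-2): visual dominance is strengthened, per the fusion rule above.
CHANNEL_WEIGHTS = {"visual": 0.6, "auditory": 0.25, "tactile": 0.15}


def fuse_inputs(confidences: dict) -> str:
    """(3-2-1)/(3-2-2): pick the channel whose weighted confidence wins,
    a simple stand-in for the cooperation and reliability judgment."""
    return max(confidences, key=lambda ch: CHANNEL_WEIGHTS[ch] * confidences[ch])


def buzz_gain(t_seconds: float, ramp_seconds: float = 1.5) -> float:
    """(3-2-3): sound feedback designed from weak to strong over time."""
    return min(max(t_seconds / ramp_seconds, 0.0), 1.0)


# Gaze, gesture and controller-vibration inputs arrive with confidences;
# the visual channel carries the spatial position of the bee.
print(fuse_inputs({"visual": 0.9, "auditory": 0.7, "tactile": 0.2}))  # visual
print(buzz_gain(0.75))                                                # 0.5
```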
Details not described in the present specification belong to the prior art known to those skilled in the art.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (4)

1. A virtual learning environment multi-channel fusion display method for field practice teaching is characterized by comprising the following steps:
(1) content generation: adopting a mode of combining aerial photography and ground acquisition to finish VR panoramic content acquisition of a practice area, constructing an organization mode of knowledge elements of different levels and areas in a VR learning environment, and finishing optimization of a jumping effect between scenes;
(2) audio-visual channel fusion: the attenuation of a learning object and a background sound source in a VR learning environment is expressed by adopting a linear volume-distance attenuation method, so that a space rendering mode of different object sounds in a VR scene is realized; the head tracking technology is combined to realize the synchronous update of the panoramic video and the sound when the head of the learner moves;
(3) and the multi-channel interactive design is used for screening, judging, deciding and fusing multi-sensory interactive behaviors according to the sequence of interactive tasks, interactive behaviors and interactive experiences and corresponding parameters of interactive objects aiming at the requirement of multi-sensory cooperative interaction of a learner in the VR learning environment.
2. The virtual learning environment multi-channel fusion display method for field practice teaching according to claim 1, wherein the content generation in the step (1) specifically comprises the following steps:
(1-1) data acquisition, wherein in order to truly restore the field practice teaching process, teaching information of a field practice area is acquired from two layers of a ground observation point and an aerial photography area, and digitization is finished in a VR panoramic video mode;
(1-1-1) collecting ground observation point information, capturing full-angle dynamic images by using a high-definition motion camera set aiming at ground observation practice contents, realizing high-density, multi-angle and real-scene information collection, and comprehensively obtaining material information of a field practice scene;
(1-1-2) acquiring aerial photography information of the unmanned aerial vehicle, and shooting the aerial photography area habitat in different ecological areas through the unmanned aerial vehicle aiming at observing aerial views and vertical distribution conditions of the macro-scale field practice area habitat so as to acquire material information of a full view field;
(1-1-3) mapping the ground observation points and the unmanned aerial vehicle aerial photography acquisition points, wherein the unmanned aerial vehicle aerial photography acquisition points need to correspond to the contents of the ground observation points, namely, one area acquires the contents of the panoramic aerial photography once, and correspondingly, information data of a plurality of ground observation points need to be acquired;
(1-2) organizing data, and constructing an aggregation mode among knowledge elements of different levels and different areas according to the progressive relation and the relevance of teaching contents; according to the flow line of field practice, the subject knowledge content is fused with the practice route;
(1-2-1) marking acquisition points, namely, using an electronic map as a basic geographic data platform, representing VR panoramic acquisition points for ground observation and unmanned aerial vehicle aerial photography by using different symbols, and marking the VR panoramic acquisition points on the electronic map according to spatial positions;
(1-2-2) longitudinally associating, namely constructing an association relation from an aerial scene to a ground acquisition point in a VR learning environment by using a pyramid hierarchical structure model, and realizing rapid switching from a macroscopic scene to a microscopic object;
(1-2-3) transversely associating, combining aerial photography points, ground observation points and subject knowledge points in an ecological area on a topographic and geomorphic sand table model in a practice area according to a moving route of field practice to form different investigation routes;
(1-3) scene transition, wherein an optimization scheme of skipping and conversion effects between scenes is designed according to the mutual relation between the practice site and the content;
(1-3-1) prompting a guide element that an interactive interface of a VR learning environment is changed into a three-dimensional sphere from a two-dimensional plane; guiding the learner to have a wider visual field by designing characters, symbols and voice media navigation information;
(1-3-2) scene transition, adding an indication icon of a target transition point in the former scene as a jump entry of the latter scene according to the geographical relative positions of the two scenes;
(1-3-3) transition optimization, wherein, for the case that the pictures of switched scenes differ greatly in color, brightness and content, similarity-based, gradual-fusion and highlight display modes are adopted to resolve the abrupt visual change.
3. The virtual learning environment multi-channel fusion display method for field practice teaching as claimed in claim 1, wherein the audio-visual channel fusion in step (2) specifically comprises the following steps:
(2-1) audio-visual combined space rendering, based on a Doppler effect model, combined with a binaural localization audio technology, and adopting a linear volume-distance attenuation method to represent attenuation of objects and other background sound sources in a VR learning environment, so as to realize a space rendering mode suitable for different object sounds and background sound effects in a VR scene;
(2-1-1) multi-sound source simulation, according to the position, direction, attenuation and Doppler effect dynamic change parameters, simulating static and dynamic point sound sources of corresponding objects and background sound effects without position and speed parameters in a VR learning environment;
(2-1-2) mixing multiple sound sources, and fusing frequency spectrums of sounds of different objects with each other to generate multi-track mixed sound in order to simulate the sound production scene of the objects in the real field environment;
(2-1-3) sound attenuation effect representation, wherein the influence of distance and direction factors on the sound attenuation effect in a field real environment is restored by adopting an attenuation mode combining logarithm and linearity, a logarithm attenuation mode is used for a directional point sound source, and a linear distance attenuation mode is adopted for a background sound source;
(2-1-4) binaural localization, based on the loudness of the sound, and the sound source motion, direction, position and structural attributes reflected by the spectral features, localizing the position of the sound source in the VR learning environment relative to the learner according to the sound propagation principle;
(2-1-5) space rendering, wherein Doppler effect is considered, and left and right ear sound channels with different intensities are rendered according to the position of a learner in a VR learning environment, the direction of a sound source, the distance and the motion change;
(2-2) synchronously updating audio and video, combining a head tracking technology, supporting synchronous updating of video pictures and sound when the head of a learner moves in a VR learning environment, and realizing multi-channel fusion display of visual sense and auditory sense;
(2-2-1) synchronizing the head and the ears, tracking the position and the posture of the head of the learner in the VR learning environment in real time at the refresh frequency of the VR picture, re-determining the distance and direction of the sound source relative to the learner, and rendering the picture the learner observes and the sound the learner hears in step;
(2-2-2) integrating visual and audio, showing content scenes in the VR learning environment according to teaching requirements, the learner positioning the visual angle on the corresponding content by turning the head, and the volumes of different sound sources being rendered according to the learner's distance from each source at that moment;
and (2-2-3) eliminating multi-sound-source interference, and aiming at the phenomenon that multiple sound sources exist in the VR learning environment, simulating the reverberation range of sound by adopting a sound source attenuation function, thereby reducing the multi-sound-source interference factors.
4. The virtual learning environment multi-channel fusion display method for field practice teaching according to claim 1, wherein the multi-channel interactive design in step (3) specifically comprises:
(3-1) designing an interaction task, so that the orderly participation of interaction behaviors is achieved, and good interaction experience is formed, thereby providing a good mechanism for multi-channel interaction;
(3-1-1) interactive task decomposition, wherein during task design, time and space attributes of a task need to be decomposed, and task interactive modes, purposes, actions and specific flows are designed according to the attributes of the task;
(3-1-2) space task design, wherein the consistency of visual feedback is always ensured in the space interaction task design process, and the space task is preferentially executed during execution;
(3-1-3) time task design, wherein the design of auditory channel information is emphasized, the content comprises background music and feedback sound effect, and the information content and accuracy of sound in an output link are mainly considered;
(3-2) task decision, wherein after multi-channel interactive information is input, the cooperative relationship among the pieces of multi-channel interactive information is first judged to complete the fusion of multi-channel information input; then, according to weight and reliability judgments on each piece of output information, the feedback information is accurately transmitted to the learner's sense organs, completing the multi-channel fusion;
(3-2-1) synthesizing input information, namely judging the cooperative relationship of each piece of information in the task execution according to the input information of the visual, auditory and tactile channels to complete the synthesis of the input information of each channel;
(3-2-2) performing multi-channel integration, and performing weight decision according to input information of each channel to ensure that output information is accurately transmitted to learners in the VR learning environment to form a multi-channel integration condition;
(3-2-3) multi-channel fusion, wherein feedback information is accurately transmitted to each sense organ of the learner by reasonably allocating output information of each channel, so that the multi-channel fusion is completed, and the learner obtains good interactive experience.
CN201911312490.8A 2019-12-18 2019-12-18 Virtual learning environment multi-channel fusion display method for field practice teaching Active CN111009158B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911312490.8A CN111009158B (en) 2019-12-18 2019-12-18 Virtual learning environment multi-channel fusion display method for field practice teaching
NL2026359A NL2026359B1 (en) 2019-12-18 2020-08-27 Method for multi-channel fusion and presentation of virtual learning environment oriented to field practice teaching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911312490.8A CN111009158B (en) 2019-12-18 2019-12-18 Virtual learning environment multi-channel fusion display method for field practice teaching

Publications (2)

Publication Number Publication Date
CN111009158A true CN111009158A (en) 2020-04-14
CN111009158B CN111009158B (en) 2020-09-15

Family

ID=70116732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911312490.8A Active CN111009158B (en) 2019-12-18 2019-12-18 Virtual learning environment multi-channel fusion display method for field practice teaching

Country Status (2)

Country Link
CN (1) CN111009158B (en)
NL (1) NL2026359B1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111714889A (en) * 2020-06-19 2020-09-29 网易(杭州)网络有限公司 Sound source control method, sound source control device, computer equipment and medium
CN111857370A (en) * 2020-07-27 2020-10-30 吉林大学 Multichannel interactive equipment research and development platform
CN112783320A (en) * 2020-10-21 2021-05-11 中山大学 Immersive virtual reality case teaching display method and system
CN113096252A (en) * 2021-03-05 2021-07-09 华中师范大学 Multi-movement mechanism fusion method in hybrid enhanced teaching scene
CN113408798A (en) * 2021-06-14 2021-09-17 华中师范大学 Barrier-free VR teaching resource color optimization method for people with abnormal color vision
WO2022121645A1 (en) * 2020-12-11 2022-06-16 华中师范大学 Method for generating sense of reality of virtual object in teaching scene

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005024756A1 (en) * 2003-09-07 2005-03-17 Yiyu Cai Molecular studio for virtual protein lab
CN102637073A (en) * 2012-02-22 2012-08-15 中国科学院微电子研究所 Method for realizing man-machine interaction on three-dimensional animation engine lower layer
CN102945564A (en) * 2012-10-16 2013-02-27 上海大学 True 3D modeling system and method based on video perspective type augmented reality
CN106157359A (en) * 2015-04-23 2016-11-23 中国科学院宁波材料技术与工程研究所 A kind of method for designing of virtual scene experiencing system
US20170039881A1 (en) * 2015-06-08 2017-02-09 STRIVR Labs, Inc. Sports training using virtual reality
CN106484123A (en) * 2016-11-11 2017-03-08 上海远鉴信息科技有限公司 User's transfer approach and system in virtual reality
CN104599243B (en) * 2014-12-11 2017-05-31 北京航空航天大学 A kind of virtual reality fusion method of multiple video strems and three-dimensional scenic
CN107817895A (en) * 2017-09-26 2018-03-20 微幻科技(北京)有限公司 Method for changing scenes and device
CN110427103A (en) * 2019-07-10 2019-11-08 佛山科学技术学院 A kind of virtual reality fusion emulation experiment multi-modal interaction method and system

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005024756A1 (en) * 2003-09-07 2005-03-17 Yiyu Cai Molecular studio for virtual protein lab
CN102637073A (en) * 2012-02-22 2012-08-15 中国科学院微电子研究所 Method for realizing man-machine interaction on three-dimensional animation engine lower layer
CN102945564A (en) * 2012-10-16 2013-02-27 上海大学 True 3D modeling system and method based on video perspective type augmented reality
CN104599243B (en) * 2014-12-11 2017-05-31 北京航空航天大学 A kind of virtual reality fusion method of multiple video strems and three-dimensional scenic
CN106157359A (en) * 2015-04-23 2016-11-23 中国科学院宁波材料技术与工程研究所 A kind of method for designing of virtual scene experiencing system
US20170039881A1 (en) * 2015-06-08 2017-02-09 STRIVR Labs, Inc. Sports training using virtual reality
CN106484123A (en) * 2016-11-11 2017-03-08 上海远鉴信息科技有限公司 User's transfer approach and system in virtual reality
CN107817895A (en) * 2017-09-26 2018-03-20 微幻科技(北京)有限公司 Method for changing scenes and device
CN110427103A (en) * 2019-07-10 2019-11-08 佛山科学技术学院 A kind of virtual reality fusion emulation experiment multi-modal interaction method and system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111714889A (en) * 2020-06-19 2020-09-29 网易(杭州)网络有限公司 Sound source control method, sound source control device, computer equipment and medium
CN111857370A (en) * 2020-07-27 2020-10-30 吉林大学 Multichannel interactive equipment research and development platform
CN111857370B (en) * 2020-07-27 2022-03-15 吉林大学 Multichannel interactive equipment research and development platform
CN112783320A (en) * 2020-10-21 2021-05-11 中山大学 Immersive virtual reality case teaching display method and system
WO2022121645A1 (en) * 2020-12-11 2022-06-16 华中师范大学 Method for generating sense of reality of virtual object in teaching scene
CN113096252A (en) * 2021-03-05 2021-07-09 华中师范大学 Multi-movement mechanism fusion method in hybrid enhanced teaching scene
CN113408798A (en) * 2021-06-14 2021-09-17 华中师范大学 Barrier-free VR teaching resource color optimization method for people with abnormal color vision

Also Published As

Publication number Publication date
CN111009158B (en) 2020-09-15
NL2026359A (en) 2021-08-17
NL2026359B1 (en) 2022-03-18

Similar Documents

Publication Publication Date Title
CN111009158B (en) Virtual learning environment multi-channel fusion display method for field practice teaching
CN103035136A (en) Comprehensive electrified education system for teaching of tourism major
CN104484327A (en) Project environment display method
CN112783320A (en) Immersive virtual reality case teaching display method and system
Hod et al. Distributed spatial Sensemaking on the augmented reality sandbox
Birt et al. Using virtual and augmented reality to study architectural lighting
Hussein Integrating augmented reality technologies into architectural education: application to the course of landscape design at Port Said University
CN115525144A (en) Multi-object interaction equipment based on virtual reality and interaction method thereof
Sleipness et al. Impacts of immersive virtual reality on three-dimensional design processes: opportunities and constraints for landscape architecture studio pedagogy
Cho et al. Challenges and opportunities for virtual learning in college geology
CN108831216A (en) True three-dimensional virtual simulation interactive method and system
Chen Research on the design of intelligent music teaching system based on virtual reality technology
Murodillaevich et al. Improve teaching and learning approach 3D primitives with Virtual and Augmented Reality
Burgos et al. Virtual reality for the enhancement of structural health monitoring experiences in historical constructions
Moural et al. User experience in mobile virtual reality: An on-site experience
Chi et al. Design and Implementation of Virtual Campus System based on VR Technology
Purwanto et al. Animal metamorphosis learning media using android Based augmented reality technology
KR20160062276A (en) System for providing edutainment contents based on STEAM
Malvika et al. Insights into the impactful usage of virtual reality for end users
Flores et al. Rebuilding cultural and heritage space of corregidor island using GPS-based augmented reality
Brooks The Applications of Immersive Virtual Reality Technologies for Archaeology
Kolås et al. Interactive virtual field trips
Tsvetkova et al. A complex workflow for development of interactive and impressive educational content using capabilities of animated augmented reality trends
Cumberbatch et al. Using extended reality technology in science education
Dowling Place-based journalism, aesthetics, and branding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant