CN113506377A - Teaching training method based on virtual roaming technology - Google Patents

Teaching training method based on virtual roaming technology

Info

Publication number
CN113506377A
CN113506377A (application CN202110842535.3A)
Authority
CN
China
Prior art keywords
teaching
model
virtual
scene
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110842535.3A
Other languages
Chinese (zh)
Inventor
姜振军
孙亚文
黄盈
申雨泽
许胡宇
姜梦洁
Current Assignee
ZHEJIANG JIANGSHAN TRANSFORMER CO LTD
Original Assignee
ZHEJIANG JIANGSHAN TRANSFORMER CO LTD
Priority date
Filing date
Publication date
Application filed by ZHEJIANG JIANGSHAN TRANSFORMER CO LTD filed Critical ZHEJIANG JIANGSHAN TRANSFORMER CO LTD
Priority to CN202110842535.3A priority Critical patent/CN113506377A/en
Publication of CN113506377A publication Critical patent/CN113506377A/en
Pending legal-status Critical Current


Classifications

    • G06T 19/006: Mixed reality (G06T 19/00 Manipulating 3D models or images for computer graphics)
    • G06F 16/3329: Natural language query formulation or dialogue systems (G06F 16/332 Query formulation)
    • G06F 16/353: Clustering; classification into predefined classes (G06F 16/35 Clustering; classification of unstructured textual data)
    • G06F 40/289: Phrasal analysis, e.g. finite state techniques or chunking (G06F 40/20 Natural language analysis)
    • G06Q 50/205: Education administration or guidance (G06Q 50/20 Education)
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 3/4023: Decimation- or insertion-based scaling, e.g. pixel or line decimation (G06T 3/40 Scaling the whole image or part thereof)

Abstract

The invention discloses a teaching and training method based on virtual roaming technology, comprising the following steps: models are built in 3DSMAX and imported into a Unity environment to construct a three-dimensional environment, in which a camera displays scene pictures of the teaching or training area and a collision detection module detects collisions between the character model and other models; a roaming path for the character model is constructed in the three-dimensional environment from an interpolated spline curve; an intelligent dialogue system built on a convolutional neural network analyzes voice information input by the user and produces voice replies; the user inputs instruction information to move the character model manually to a specified position, the character model is also moved automatically along the roaming path, and the intelligent dialogue system replies to the user's voice input, completing the set checkpoint tasks and realizing virtual reality teaching or training. The method offers a stronger sense of realism and a better interactive experience.

Description

Teaching training method based on virtual roaming technology
Technical Field
The invention belongs to the field of virtual reality application and education and teaching, and particularly relates to a teaching and training method based on a virtual roaming technology.
Background
Virtual reality technology can create vivid situations, build immersive learning environments, visualize abstract knowledge, and support new education modes and evaluation methods. Its three main characteristics are immersion, interactivity and imagination. A virtual situation created in a virtual reality environment gives students or employees a feeling of being personally on the scene and can improve their learning experience.
In recent years, with the upgrading of people's educational ideas, various educational games have been developed, and combining them with virtual reality technology makes the resulting educational games more effective. Among them, the document Berns A, Gonzalez-Pardo A, Camacho D. Game-like language learning in 3-D virtual environments [J]. Computers & Education, 2013, 60(1): 210– studies game-like language learning in 3-D virtual environments.
The document Gabriel Culbertson, Erik Andersen, Walker White, Daniel Zhang, and Malte Jung, Crystallize: an immersive, collaborative game for second language learning, in CSCW 2016, discloses Crystallize, a 3D English learning game that interacts with the Oculus Rift head-mounted display and is gamified through a mission system.
The thesis "Research on spoken English learning and evaluation in SELL-Corpus and VR environments" [D], East China University, 2019, discloses a virtual English learning environment that supports connecting devices such as the HTC Vive, Gear VR, PCs and mobile phones, is used for English training in specific scenes such as interviews and speeches, supports multi-device synchronization, and provides an interactive communication platform for learners with different device requirements.
In virtual reality game development, a third-person main character is designed; roaming through the scene via this character increases students' or employees' game experience and conveys a learning experience similar to the real world. The document "Application analysis of the Enscape for SketchUp virtual roaming technology in exhibition hall design" discloses two modes, automatic roaming and manual roaming, that allow free movement in the scene; by building a mode in which students or employees move via mouse and keyboard, an education roaming system with strong immersion and interactivity is completed, increasing game experience and improving the learning effect.
At present, numerous immersive virtual reality games help students or employees learn in a virtual reality environment, but the realism and interactivity of current virtual reality education still need improvement. Applications that help students or employees learn mathematics and that support enterprise training in a virtual reality environment are therefore urgently needed, with education levels and interaction modes that match the learning habits of students, employees and enterprise training, so that education or training design is better realized through technology.
Disclosure of Invention
The invention provides a virtual reality teaching and training method with a stronger sense of realism and better interactive experience.
A teaching and training method based on virtual roaming technology comprises the following steps:
s1: utilizing 3DSMAX to construct a character model, a local model and a teaching or training area overall model, importing the constructed character model, the local model and the teaching or training area overall model into a Unity environment to construct a three-dimensional environment of a teaching or training area, describing each scene in the three-dimensional environment, and accessing a camera and a collision detection module in the three-dimensional environment, wherein the camera is used for displaying a scene picture of the teaching or training area, and the collision detection module is used for detecting collision between the character model and other models;
s2: constructing a character model roaming path in a three-dimensional environment by using an interpolation spline linear curve, setting control points of the interpolation spline linear curve, determining a plurality of interpolation points among the control points through line segments among the control points, and establishing a roaming path with smooth lines through the plurality of interpolation points;
s3: an intelligent dialogue system is constructed in a three-dimensional environment by utilizing a convolutional neural network and is used for analyzing and processing voice information input by a user to obtain voice reply information;
s4: when the intelligent interactive personality model is applied, the personality model is manually controlled to move to a first appointed position or the visual angle is switched through instruction information input by a user side, the personality model is automatically controlled to move to a second appointed position based on a roaming path, and voice information input by a user is replied through an intelligent interactive system so as to complete a set checkpoint task in a three-dimensional environment, so that virtual reality teaching or training is realized.
According to the invention, by providing both automatic and manual roaming, the user can selectively control the character model; the camera and the collision detection module prevent the unreality caused by colliding with or passing through objects, increasing the user experience; and manually controlling the character model's movement provides good interactive experience and increases realism.
The method for constructing the intelligent dialogue system in the three-dimensional environment by utilizing the convolutional neural network comprises the following specific steps of:
The intelligent dialogue system is built with a convolutional neural network and comprises an input layer, a convolutional layer, a pooling layer and a classification layer. The input layer splices the input word vectors and outputs a concatenated word vector; the convolutional layer extracts features from the concatenated word vector with filters; the pooling layer aggregates the features extracted by the filters and outputs a word convolution feature vector; the classification layer comprises a fully connected layer and a softmax function, where the fully connected layer fuses the input word convolution feature vectors, the fused feature vector is input to the softmax function for classification, and voice reply information is obtained from the classification result.
Convolution extracts the local features present in sentences, and the final pooling operation guarantees a fixed feature-vector length, avoiding vectors of different lengths produced by different filters. An intelligent dialogue system must attend to semantic representation (the ambiguity, colloquialism and diversity of natural language), sentence logic, consistency of context, interactivity of communication, and so on. The key challenges for current intelligent dialogue systems are better user experience, more standardized dialogue-system components, more reasonable evaluation methods, and stronger autonomous learning and updating capabilities.
Dividing voice information input by a user into a plurality of words through a word segmentation tool, converting the words into a plurality of word vectors through a word vector model, and inputting the word vectors into an input layer.
The specific steps of determining the interpolation points between the control points through the line segments between them are as follows:
Line segments of equal spacing are inserted between the control points, and the difference s between the curve and each line segment is computed in turn until it falls below a preset threshold S (the threshold must be chosen according to the coordinates and actual distances of objects in the scene; here d is set to 5), yielding a plurality of interpolation points between the control points. The viewpoint is set as V (the position of the virtual character's head); a point M at distance d along the movement direction on the line of sight is taken, and V and M are connected to form a line segment. The more line segments within the same distance, the smoother the curve (a straight segment between two points is shortest, but a larger number of segments between the two points yields a higher degree of smoothness).
The specific steps of enabling the camera to move along with the character model are as follows:
setting the camera as a sub-object of the character model, and adjusting the observation angles of the camera and the character model to keep the observation angles consistent and the relative positions fixed.
A main character object Capsule is set in the scene, the camera object MainCamera is dragged to be a child object of Capsule, and the observation angles of Capsule and MainCamera are adjusted. The principle of camera movement: the camera and the main character are relatively fixed in the scene, so the camera moves with the main character; mouse input rotates the camera as the character moves, so that the camera follows the character and gives the feeling of being personally on the scene.
Manual control of the character model's movement and viewing-angle switching is based on receiving instruction information sent by the interactive devices, namely a keyboard and a mouse: the keyboard sends the four direction movement instructions forward, backward, left and right, and the mouse sends viewing-angle instructions for rotating the character and zooming in and out.
Through mouse-and-keyboard interaction, students or employees can move and interact in the scene; a well-designed interactive experience in the virtual scene increases realism and the sense of experience.
The specific steps of depicting each scene in the three-dimensional environment are as follows: scenes are drawn in the three-dimensional environment of the virtual space; the checkpoint panorama images collected from the server are mapped onto the three-dimensional environment as textures to obtain a three-dimensional virtual scene; corresponding texture mapping, illumination mapping and shadow mapping are applied to the various scenes; and media information, including video, text, pictures and animation, is added into the three-dimensional virtual scene.
The camera is set as a child object of the character model, so that the camera moves along with the character model to display a scene picture of a teaching or training area.
Compared with the prior art, the invention has the beneficial effects that:
the method provided by the invention can enable students or employees to carry out roaming experience through two modes of automatic roaming and manual roaming, the roaming mode is designed, objects can be automatically detected, and when the objects are collided, collision detection is carried out, so that the users can feel personally on the scene.
Through the checkpoint design, a "2D/3D switch" button is provided, giving a more realistic sense of experience in the scene: objects are watched not only in a two-dimensional plane; in terms of learning, the user's experience is promoted; objects are observed from multiple angles, as in real life; and at the technical level, the problem of instantly switching the same object between two- and three-dimensional views is solved.
Drawings
FIG. 1 is a block diagram of an overall teaching or training method based on virtual roaming techniques, as utilized in an exemplary embodiment;
FIG. 2 is a level design framework diagram employed by the embodiments;
FIG. 3 is a level classification diagram employed by the embodiments;
FIG. 4 is a flow chart of an intelligent system used in the embodiments.
Detailed Description
In order to increase the experience of students or employees relative to traditional learning through virtual reality technology, an education platform based on virtual reality technology was developed. The platform comprises 4 classes and 10 scenes, improves the mathematical ability of students or employees, and creates vivid scenes that give enterprise students or employees an immersive experience.
The overall frame diagram is shown in fig. 1, and the method adopted by the specific embodiment mainly comprises the following steps:
step one, environment preparation and model making
According to the invention, a Unity environment is configured, and models and scenes are first constructed with 3DSMAX. Four types of scene are required, covering the four learning categories in mathematics teaching or training: finding rules, knowing sizes, knowing numbers and knowing figures. Each type of scene contains different levels; scenes and models must be designed in advance at the level-design and model-making stage, and 3DSMAX is used for modeling.
1. Design a plan and import it into AutoCAD; draw two-dimensional lines with the point-snapping tool, scale them reasonably according to the obtained data set, and place them in the corresponding positions;
2. Import the adjusted two-dimensional lines into 3DSMAX and re-trace the points with the line tool, using two-dimensional snapping in the process;
3. After the three-dimensional model is built, set the mapping material: depict the pre-selected texture material images; cut the collected pictures and adjust contrast, saturation and exposure with the image-processing software Photoshop; assign the processed maps to materials in the material editor and adjust them to enhance the realism of the model; then export the finished model from 3DSMAX as an FBX file.
4. Import the FBX file into Unity and perform scale adjustment and position placement.
Step two, the camera moves along with the character
After modeling the scene and the characters in 3DSMAX, the main camera is set to follow the character's movement: when the character's position changes, the camera moves with it. The camera is set as a child object of the main character object so that it moves with the character, and the first-person camera is set as a child object of the third-person camera. A main character object Capsule is set in the scene, the camera object MainCamera is dragged to be a child object of Capsule, and the observation angles of Capsule and MainCamera are adjusted. The principle of camera movement: the camera and the main character are relatively fixed in the scene, so the camera moves with the main character; mouse input rotates the camera as the character moves, giving the feeling of being personally on the scene. The core code is as follows:
(reproduced in the original only as an image: Figure BDA0003179560440000061)
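The image listing is not recoverable; as an illustrative stand-in only (not the patent's actual code), the follow-camera principle described above, a camera held at a fixed local offset that rotates with the character, can be sketched in Python. All names and offset values (`follow_camera`, the `(0, 1.6, -3)` offset) are hypothetical:

```python
import math

def follow_camera(char_pos, char_yaw_deg, offset=(0.0, 1.6, -3.0)):
    """Place the camera at a fixed local offset from the character,
    rotated by the character's yaw, so camera and character stay
    relatively fixed while the camera follows every movement."""
    yaw = math.radians(char_yaw_deg)
    ox, oy, oz = offset
    # rotate the local offset about the vertical (y) axis
    wx = ox * math.cos(yaw) + oz * math.sin(yaw)
    wz = -ox * math.sin(yaw) + oz * math.cos(yaw)
    cx, cy, cz = char_pos
    return (cx + wx, cy + oy, cz + wz)

# with yaw 0 the camera sits 3 units behind and 1.6 above the character
print(follow_camera((0.0, 0.0, 0.0), 0.0))  # → (0.0, 1.6, -3.0)
```

In Unity itself the same effect is obtained simply by parenting the camera to the character object, as the description states; the function above only illustrates the underlying offset arithmetic.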
Step three, roaming technology in the virtual scene
After model import, camera following of the main character and similar operations are completed, the virtual roaming browsing interface is set up and the navigation chart and interactive buttons are made. With the built models, scenes, browsing interface, navigation chart and interactive buttons, the framework is completed; the corrected model scenes are spliced together, and the character is placed at a suitable position so that the main character can be controlled to move, jump and so on. Automatic roaming is set for the character, and waypoints are set in the scene, i.e., the route nodes the main character passes while moving through the scene.
Through the designed mouse-and-keyboard interaction, the character can walk freely in the scene: the keyboard controls movement in the four directions forward, backward, left and right, and the mouse controls operations such as rotating the character and zooming in and out. The concrete implementation includes creating a walking camera and setting up collision detection to guarantee a realistic scene experience. Key and mouse presses generate instructions; the background receives the pressed instructions and triggers the corresponding commands, performing operations such as rotation and movement. During browsing, media such as video, text, pictures and animation are added, and finally a user interface is made so that students or employees experience human interaction in the scene and a real sense of experience.
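As a hedged sketch of the keyboard command handling just described (the four movement directions), the dispatch can be modeled as a table of unit displacements; the command names and step size are illustrative assumptions, not specified by the patent:

```python
# Map each movement command to a unit displacement in scene coordinates.
MOVES = {
    "forward":  (0, 0, 1),
    "backward": (0, 0, -1),
    "left":     (-1, 0, 0),
    "right":    (1, 0, 0),
}

def move(position, command, step=1.0):
    """Apply one received movement instruction to the character position."""
    dx, dy, dz = MOVES[command]
    x, y, z = position
    return (x + dx * step, y + dy * step, z + dz * step)

p = (0.0, 0.0, 0.0)
for cmd in ["forward", "forward", "right"]:  # background receives pressed instructions
    p = move(p, cmd)
print(p)  # → (1.0, 0.0, 2.0)
```

Mouse-driven rotation and zooming would adjust the camera angles analogously.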
The roaming core code appears in the original document only as an image (Figure BDA0003179560440000071).
The roaming path is established by creating control points, inserting equally spaced line segments between them and computing the differences in turn, yielding a roaming path with smooth lines: equally spaced line segments are inserted between the control points, and the difference s between the curve and each segment is computed in turn until it falls below the preset threshold S (the threshold must be chosen according to the coordinates and actual distances of objects in the scene; here d is set to 5), giving a plurality of interpolation points between the control points.
The specific steps for establishing a roaming path with smooth lines are as follows:
The viewpoint is set as V, the position of the virtual character's head; a point M at distance d along the direction of motion on the line of sight is taken, and V and M are connected to form a line segment. The more line segments within the same distance, the smoother the curve (a straight segment between two points is shortest, but a larger number of segments between the two points yields a higher degree of smoothness).
float sdu = 24;            // frames per second
Vector Q1, Q2, Q;          // Q1, Q2 are control points; Q is an interpolation point between Q1 and Q2
float D, T, V;             // D is the flight distance, T the flight time, V the flight speed
int n;                     // number of interpolation points between Q1 and Q2
D = sqrt((Q2.x - Q1.x)^2 + (Q2.y - Q1.y)^2 + (Q2.z - Q1.z)^2);
T = D / V;
n = T * sdu;
for (i = 1; i < n; i++)    // compute each interpolation point
{
    Q.x = Q1.x + (Q2.x - Q1.x) * i / (n - 1);
    Q.y = Q1.y + (Q2.y - Q1.y) * i / (n - 1);
    Q.z = Q1.z + (Q2.z - Q1.z) * i / (n - 1);
}
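Under the assumption that the listing above performs linear interpolation between successive control points, a runnable Python restatement might look as follows (function and constant names are illustrative):

```python
import math

FPS = 24  # corresponds to sdu, frames per second

def roam_points(q1, q2, speed):
    """Interpolation points between control points q1 and q2; the point
    count follows from flight distance, flight speed and frame rate."""
    d = math.dist(q1, q2)        # flight distance D
    t = d / speed                # flight time T = D / V
    n = max(2, round(t * FPS))   # number of interpolation points
    return [tuple(a + (b - a) * i / (n - 1) for a, b in zip(q1, q2))
            for i in range(n)]

pts = roam_points((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), speed=10.0)
print(len(pts), pts[0], pts[-1])  # → 24 (0.0, 0.0, 0.0) (10.0, 0.0, 0.0)
```

Unlike the per-frame loop in the listing, this returns the whole path at once; sampling one point per frame reproduces the roaming motion.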
Each computed interpolation point is used as the starting point of the next interpolation computation, and the second point is set as the observation point of the broken-line segment during interpolation, so that a smooth, fluctuation-free roaming curve is obtained and the camera does not fluctuate sharply when meeting a corner or passing along a curved path during roaming.
Step four, level design; the level design framework diagram is shown in figure 2
When designing the game levels, development centers on four aspects that help students or employees learn mathematics: knowing numbers, finding rules, comparing sizes and learning figures, as shown in fig. 3. In the checkpoint design, take the rule-finding checkpoints as an example. Checkpoint 1: explore rules in the scene with objects of the same color and the same size. Checkpoint 2: explore figure rules with objects of the same color and different sizes. Checkpoint 4: students or employees search for rules among objects of different colors and different sizes.
For example, among the circle and square recognition checkpoints, select the checkpoint for recognizing squares and enter it; in the first stage, watch a learning video on recognizing squares;
in the second stage, freely draw objects on a blackboard, using the Print in 3D plug-in for free writing;
in the third stage, perform 2D tracing on the screen with a 2D tracing method, drawing along the screen track;
in the fourth stage, search for all square objects in the scene. When a student or employee finds a square object and touches it, voice prompts such as "Tai bang" ("Awesome!") and "Well done!" are played at random, guiding students or employees to find the correct objects in the scene. When they have found all the square objects in the room scene, the objects they found are displayed on the table, and audio such as "the ball is round, the watermelon is round" is played, increasing the interactive experience of students or employees. Meanwhile, a 2D/3D switch button appears in the room to help students or employees observe objects from different angles, developing their three-dimensional thinking: objects are observed from both two-dimensional and three-dimensional angles, better combining the virtual world with the real world;
in the fifth stage, students or employees splice round objects into figures, simulating real-world building-block stacking, so that they understand the figures better.
Step five, voice recognition design
The intelligent dialogue system converts the user's input into text through speech semantic recognition, then passes it through natural language understanding, dialogue state tracking, dialogue policy, natural language generation and speech synthesis tools; the workflow of the intelligent dialogue system is shown in fig. 4. Common intelligent dialogue systems fall into three categories: task-oriented dialogue systems, chat-oriented dialogue systems and knowledge question-answering systems. A question-answering system feeds corresponding answers back to the user through simple, efficient information retrieval. A chat-oriented dialogue system is usually used for open-domain question answering and typically replies by keyword-based template matching and by retrieving the best-matching sentence from a database. A task-oriented dialogue system replies within a special scene in three steps: domain recognition, intention understanding and slot-value matching; first the user's domain is recognized (e.g., airline ticket purchasing or supermarket shopping), then intention understanding is performed, i.e., a classifier categorizes the user's question, and finally slot values are filled with a typical sequence-labeling model.
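The three task-dialogue steps named above (domain recognition, intention understanding, slot-value filling) can be illustrated with a toy keyword-based sketch; the rules, domain names and function names below are hypothetical stand-ins for the real classifiers and sequence-labeling model:

```python
DOMAIN_KEYWORDS = {"ticket": "airline_booking", "supermarket": "shopping"}
INTENT_KEYWORDS = {"buy": "purchase", "return": "refund"}

def parse_utterance(text):
    """Toy pipeline: recognize domain, classify intent, fill slot values."""
    words = text.lower().split()
    domain = next((d for w, d in DOMAIN_KEYWORDS.items() if w in words), "unknown")
    intent = next((i for w, i in INTENT_KEYWORDS.items() if w in words), "unknown")
    # slot-filling stand-in: collect capitalized tokens as candidate values
    slots = [w for w in text.split() if w.istitle() and len(w) > 1]
    return {"domain": domain, "intent": intent, "slots": slots}

print(parse_utterance("I want to buy a ticket to Beijing"))
```

A real system would replace the keyword tables with the trained classifier and sequence-labeling model the text describes.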
Natural language understanding, as an important link in an intelligent dialogue system, directly affects the quality of later language processing. Natural language understanding has evolved from template-matching-based methods through machine-learning-based methods to deep-learning-based methods, including approaches based on the convolutional neural network (CNN), the long short-term memory network (LSTM), and attention-based spoken language understanding. This design adopts a deep learning method to generate entirely new replies, an important part of the dialogue system. A convolutional neural network (CNN), commonly used for sentence modeling, comprises an input layer, a convolutional layer, a pooling layer, and a classification layer.
(1) Input layer: the sentence input by the user is first processed with a word segmentation tool, each word is converted into a d-dimensional vector x_i by a word-vector model, and the word vectors generated by the conversion are concatenated. The calculation is shown in equation (1), where x_i ∈ R^d denotes the ith word in the sentence and ⊕ denotes the concatenation operator:

x_{1:n} = x_1 ⊕ x_2 ⊕ … ⊕ x_n (1)
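Equation (1) can be sketched in a few lines; random vectors stand in for a trained word-vector model, and the concatenated sentence is represented as an n × d matrix (row i holding x_i), which is the form the convolutional layer consumes.

```python
# Sketch of the input layer of equation (1): each word maps to a
# d-dimensional vector x_i, and the sentence is the concatenation
# x_{1:n} = x_1 (+) x_2 (+) ... (+) x_n, stored as an (n, d) matrix.
# The random vectors below are stand-ins for a trained word2vec model.
import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # word-vector dimension
words = ["virtual", "roaming", "teaching"]
vectors = {w: rng.standard_normal(d) for w in words}  # stand-in word vectors

# Row-stack the per-word vectors into the sentence matrix x_{1:n}.
x = np.stack([vectors[w] for w in words])   # shape (n, d) = (3, 4)
```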
(2) Convolutional layer: a window of h words, from the ith word to the (i+h−1)th word, is taken as the word-window size, and a filter w of dimension h × k performs the matrix calculation to obtain the corresponding convolution feature c_i. The calculation is shown in equation (2), where s denotes a nonlinear activation function, b ∈ R is a bias term, and w ∈ R^{hk} indicates that the filter has dimension h × k:

c_i = s(w · x_{i:i+h−1} + b) (2)
(3) The filter then slides over the sentence, with the word windows {x_{1:h}, x_{2:h+1}, x_{3:h+2}, …, x_{n−h+1:n}}, so that the generated feature map is as shown in equation (3), where c ∈ R^{n−h+1} indicates that the feature map has dimension n − h + 1:

c = [c_1, c_2, …, c_{n−h+1}] (3)
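Equations (2) and (3) together can be sketched as one sliding-window loop; ReLU is used here as an illustrative choice for the nonlinearity s, and all weights are random stand-ins for trained ones.

```python
# Sketch of the convolutional layer of equations (2)-(3): a filter w with
# window size h slides over the sentence matrix, producing one feature
# c_i = s(w . x_{i:i+h-1} + b) per word window, so the feature map c
# has n - h + 1 entries.
import numpy as np

def conv_feature_map(x, w, b):
    n, _ = x.shape
    h = w.shape[0]
    relu = lambda v: max(v, 0.0)          # stand-in for the nonlinearity s
    # one convolution feature per word window x_{i:i+h-1}   -- eq. (2)
    # collected into the feature map c of length n - h + 1  -- eq. (3)
    return np.array([relu(np.sum(w * x[i:i + h]) + b)
                     for i in range(n - h + 1)])

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 4))    # n = 6 words, d = 4 dimensions
w = rng.standard_normal((2, 4))    # window size h = 2
c = conv_feature_map(x, w, b=0.1)  # feature map of length 6 - 2 + 1 = 5
```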
(4) Pooling layer: this layer aggregates the feature set generated by the convolution, for example by max pooling. The calculation is shown in equation (4), selecting the maximum value from the feature map generated by the convolution kernel w ∈ R^{hk}:

ĉ = max{c} (4)
(5) Softmax layer: after the convolutional layer applies m filters and the pooling operation is performed, the final feature vector representation z = [ĉ_1, ĉ_2, …, ĉ_m] is generated. This feature vector is input to a fully connected layer, and the softmax function finally yields the probability distribution over the predicted labels (with a prediction threshold of 0.5). The calculation is shown in equation (5):

y_i = softmax(W_z · z + b) (5)
The convolution operation extracts local features from sentences, and the final pooling operation guarantees that the generated feature vector has a fixed length, avoiding vectors of different lengths produced by different filters. An intelligent dialogue system must attend to semantic representation (the ambiguity, colloquialism, and diversity of natural language), sentence logic, consistency between earlier and later content, interactivity of communication, and the like. At present, the keys to an intelligent dialogue system are better user product experience, more standardized dialogue-system components, more reasonable system evaluation methods, and stronger capabilities for independent learning and updating.
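The pooling and classification steps of equations (4) and (5) can be sketched as follows; the weights W_z and b are random stand-ins for trained parameters, and m = 3 filters with two output labels are assumed for illustration.

```python
# Sketch of the pooling and softmax layers of equations (4)-(5): max
# pooling reduces each filter's feature map to one scalar c_hat = max{c},
# the m pooled values form z = [c_hat_1, ..., c_hat_m], and
# y = softmax(W_z . z + b) gives the probability distribution over labels.
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())      # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(2)
m, n_labels = 3, 2
feature_maps = [rng.standard_normal(5) for _ in range(m)]  # one map per filter

z = np.array([fm.max() for fm in feature_maps])   # eq. (4): max pooling
Wz = rng.standard_normal((n_labels, m))           # stand-in trained weights
b = rng.standard_normal(n_labels)
y = softmax(Wz @ z + b)                           # eq. (5): label distribution
```

Note how max pooling makes the length of z depend only on the number of filters m, not on the sentence length, which is exactly the fixed-length guarantee discussed above.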

Claims (9)

1. A teaching and training method based on virtual roaming technology is characterized by comprising the following steps:
s1: utilizing 3DSMAX to construct a character model, a local model and a teaching or training area overall model, importing the constructed character model, the local model and the teaching or training area overall model into a Unity environment to construct a three-dimensional environment of a teaching or training area, describing each scene in the three-dimensional environment, and accessing a camera and a collision detection module in the three-dimensional environment, wherein the camera is used for displaying a scene picture of the teaching or training area, and the collision detection module is used for detecting collision between the character model and other models;
s2: constructing a character model roaming path in a three-dimensional environment by using an interpolation spline linear curve, setting control points of the interpolation spline linear curve, determining a plurality of interpolation points among the control points through line segments among the control points, and establishing a roaming path with smooth lines through the plurality of interpolation points;
s3: an intelligent dialogue system is constructed in a three-dimensional environment by utilizing a convolutional neural network and is used for analyzing and processing voice information input by a user to obtain voice reply information;
s4: in application, the character model is manually controlled to move to a first designated position or to switch the view angle through instruction information input from the user side, the character model is automatically controlled to move to a second designated position based on the roaming path, and voice information input by the user is replied to through the intelligent dialogue system so as to complete the set checkpoint tasks in the three-dimensional environment, thereby realizing virtual reality teaching or training.
2. The teaching and training method based on virtual roaming technology as claimed in claim 1, wherein the specific steps of constructing the intelligent dialogue system in the three-dimensional environment by using the convolutional neural network are as follows:
the intelligent dialogue system is established by using a convolutional neural network and comprises an input layer, a convolutional layer, a pooling layer and a classification layer; the input layer splices a plurality of input word vectors and outputs a spliced word vector; the convolutional layer extracts features from the input spliced word vector by using filters, the features extracted by the filters are aggregated through the pooling layer, and a word convolution feature vector is output; the classification layer comprises a fully connected layer and a softmax function, the fully connected layer fuses the input word convolution feature vectors, the fused feature vector is then input to the softmax function for classification, and the voice reply information is obtained based on the classification result.
3. The teaching and training method based on virtual roaming technology as claimed in claim 2, wherein a word vector is constructed based on the inputted voice information, the voice information inputted by the user is divided into a plurality of words by a word segmentation tool, the plurality of words are converted into a plurality of word vectors by a word vector model, and the plurality of word vectors are inputted to the input layer.
4. The teaching and training method based on the virtual roaming technology as claimed in claim 1, wherein the specific steps of determining the interpolation points between the control points through the line segments between the control points are as follows:
and inserting line segments with equal intervals among the control points, and sequentially calculating the difference between the control points and the line segments until the difference meets a preset first threshold value to obtain a plurality of interpolation points among the control points.
5. The teaching and training method based on virtual roaming technology as claimed in claim 1, wherein the specific steps of establishing a roaming path with smooth lines are as follows:
setting a viewpoint V at the position of the virtual character's head, taking a point M at a distance d from the viewpoint along the movement direction, and connecting V and M to form a line segment; when it is judged by calculation that the number of line segments within the same distance satisfies a second threshold value, a smooth roaming path curve is obtained.
6. The method as claimed in claim 1, wherein the manual control of the movement of the character model and the switching of the view angle is performed based on receiving command information transmitted from interactive buttons, the interactive buttons include a keyboard and a mouse, wherein the keyboard is used for transmitting four directions of movement commands of the character model, such as forward, backward, leftward and rightward, and the mouse is used for transmitting a command of adjusting the view angle, such as rotation, zoom-in and zoom-out of the character model.
7. The method for teaching and training based on virtual roaming technology as claimed in claim 1, wherein the steps of depicting each scene in the three-dimensional environment are as follows:
the method comprises the steps of drawing a scene in a three-dimensional environment of a virtual space, mapping a checkpoint panoramic image collected from a server to the three-dimensional environment in a texture mode to obtain a three-dimensional virtual scene, performing texture mapping, illumination mapping and shadow mapping on the three-dimensional virtual scene, and adding media information to the three-dimensional virtual scene, wherein the media information comprises video, characters, pictures and animation information so as to increase the sense of reality of the three-dimensional environment.
8. The method as claimed in claim 1, wherein the camera is configured as a child of the character model, so that the camera moves along with the character model to show a scene of the teaching area.
9. The method for instructional training based on virtual roaming technology as claimed in claim 8, wherein the specific steps of moving the camera following the character model are:
setting the camera as a sub-object of the character model, and adjusting the observation angles of the camera and the character model to keep the observation angles consistent and the relative positions fixed.
CN202110842535.3A 2021-07-26 2021-07-26 Teaching training method based on virtual roaming technology Pending CN113506377A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110842535.3A CN113506377A (en) 2021-07-26 2021-07-26 Teaching training method based on virtual roaming technology


Publications (1)

Publication Number Publication Date
CN113506377A true CN113506377A (en) 2021-10-15

Family

ID=78014721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110842535.3A Pending CN113506377A (en) 2021-07-26 2021-07-26 Teaching training method based on virtual roaming technology

Country Status (1)

Country Link
CN (1) CN113506377A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022644A (en) * 2021-11-05 2022-02-08 华中师范大学 Bit selection method for multiple virtualized bodies in teaching space
TWI800124B (en) * 2021-11-26 2023-04-21 輔仁大學學校財團法人輔仁大學 A virtual reality interactive system that uses virtual reality to simulate children's daily life training
CN115617174A (en) * 2022-10-21 2023-01-17 吉林大学 Method for constructing interactive virtual exhibition hall
CN115617174B (en) * 2022-10-21 2023-09-22 吉林大学 Method for constructing interactive virtual exhibition hall


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination