CN117475115A - Path guiding system in virtual-real fusion environment and working method thereof - Google Patents

Path guiding system in virtual-real fusion environment and working method thereof

Info

Publication number
CN117475115A
Authority
CN
China
Prior art keywords
user
algorithm
sight
path
teaching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311497689.9A
Other languages
Chinese (zh)
Other versions
CN117475115B (en)
Inventor
钟正
康宸
曾啸宇
习江涛
吴砥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central China Normal University
Original Assignee
Central China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central China Normal University filed Critical Central China Normal University
Priority to CN202311497689.9A priority Critical patent/CN117475115B/en
Publication of CN117475115A publication Critical patent/CN117475115A/en
Application granted granted Critical
Publication of CN117475115B publication Critical patent/CN117475115B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the field of virtual reality teaching applications and provides a path guidance system and working method for a virtual-real fusion environment. The invention locates the user in the virtual-real fusion space, estimates the spatial heading from the user's field of view, predicts the trajectories of dynamic obstacles, plans the walking path, and guides the user through the ladder-classroom teaching space with guidance prompts. The invention can aggregate physical-virtual-social ternary space information and helps improve the effect of education and teaching.

Description

Path guiding system in virtual-real fusion environment and working method thereof
Technical Field
The invention belongs to the field of teaching application of virtual reality, and particularly relates to a path guiding system and a working method in a virtual-real fusion environment.
Background
With the wide application of general-purpose artificial intelligence large models represented by ChatGPT, AI-generated content (AIGC) is rapidly penetrating every link of the education field, and its breadth and depth of application will exceed anything seen before. In a virtual-real fusion teaching scene, path guidance technology can help the user achieve positioning, navigation and obstacle avoidance of the virtual world within the real environment, so as to reach a destination more easily and efficiently or to complete specific teaching tasks. According to the user's spatial position and field of view, AIGC can provide functions such as intelligent obstacle avoidance, path planning, path optimization and real-time navigation. A virtual-real fusion environment can aggregate physical-virtual-social three-dimensional space information, create new forms and content of educational applications, and shape new advantages for the future development of education. For example, path guidance in a virtual-real fusion environment can assist teachers and students with weak spatial perception in effectively predicting directions and planning travel paths in teaching environments with complex scene elements. Therefore, applying AIGC technology to the virtual-real fusion teaching environment can enrich teaching means, helps improve education and teaching effectiveness against the background of digital transformation, and has broad application value.
Path guidance in current virtual-real fusion environments still has several problems: (1) travel direction estimation is affected by multipath effects: current direction estimation relies mainly on the sensing regions of scene elements, and in a compactly laid-out teaching space these sensing regions interfere with one another, making it difficult for the user to predict the travel direction accurately; (2) path planning does not account for special constraints: conventional path planning generally considers only static obstacles and ignores dynamic obstacles, the terrain environment and other constraints such as the positions of teachers and students, so the generated travel paths often do not match user expectations; (3) path guidance information overload: to make users participate more actively in teaching activities, a large amount of path guidance information is often provided in the virtual-real fusion environment, including tedious voice prompts and complex image presentations; however, excessive guidance information increases the user's cognitive load and leaves the user "lost in information" in the space.
Disclosure of Invention
In view of the defects of the prior art or demands for its improvement, the invention provides a path guiding system and a working method in a virtual-real fusion environment, and offers an intelligent and systematic interaction method for new forms of digital teaching.
The object of the invention is achieved by the following technical measures.
The invention provides a path guiding system in a virtual-real fusion environment, which comprises:
the teaching scene creation module is used for shooting a multi-view image, reconstructing a three-dimensional teaching scene, and arranging and adjusting parameters of each scene element;
the user space positioning module is used for collecting point cloud coordinates of the model, generating a three-dimensional map of the teaching space and determining the space position of a user;
the user sight tracking module is used for tracking the head change of the user, updating the information in the view field, deducing the sight direction of the user and determining the sight point of the sight;
the advancing direction estimating module is used for calculating the intersection point between the sight line vector and the attention line segment and deducing the advancing direction of the user in the field of view;
the obstacle recognition module is used for distinguishing dynamic and static obstacles, predicting the motion trail of the dynamic obstacles, and marking obstacle icons on the 2D eagle eye pattern;
the prediction path optimization module is used for predicting a walkable candidate path, gridding a three-dimensional map of a ladder classroom and correcting detour, redundancy, repetition and discontinuous paths;
and the guiding prompt generation module is used for converting the text clues into navigation instructions and updating the pointer direction of the virtual compass according to the sight direction of the user.
The invention also provides a working method of the path guiding system in the virtual-real fusion environment, which comprises the following steps:
(1) Creating a teaching scene, and shooting high-definition images of the ladder classrooms by using a plurality of cameras along the steps of the walkways of the ladder classrooms; generating walls, podium, steps, interactive electronic whiteboard and table and chair elements in the teaching scene by using a multi-view three-dimensional reconstruction algorithm; the layout of the table and the chair is completed at the step plane position, and the position, the orientation and the gesture of each scene element in the step classroom are adjusted by using a conjugate gradient algorithm;
(2) The method comprises the steps of positioning a user space, determining the point cloud 3D coordinates of each model according to the displacement and the direction of the movement of the user and combining the difference value of the point cloud data of the positions before and after the movement, and generating a real-time three-dimensional map of the teaching space; estimating the position coordinates of the user in the three-dimensional map by using a front intersection algorithm; generating a 2D aerial view of the teaching space by using an orthographic projection method, and representing a desk, a chair, a platform and a walkway in the teaching space by using different point, line and surface symbols;
(3) The user sight tracking, the head tracking algorithm is adopted to detect the position and posture parameter change of the user in real time, and the positions and postures of obstacles, traffic lines and other teacher and student users in the view field range of the user are updated; extracting structural features of an eye region image of a user by using a convolutional neural network algorithm, and deducing the sight direction of the user; a visual search algorithm is adopted to determine the gaze point coordinates of the user, and the region where the gaze point is located is determined to be an interest region;
(4) Estimating the travelling direction, analyzing a surface patch set by using a least square fitting algorithm, extracting barycenter coordinates of each approximate triangular surface patch, and connecting each barycenter to generate a gazing line segment; obtaining a sight line vector according to the pupil position and the sight line direction of a user, using a parameterized equation to represent a line segment of a fixation object, and calculating an intersection point between the vector and the parameterized equation by using a numerical method; deducing the direction in which the user can travel in the field of view by adopting a course estimation algorithm;
(5) Identifying obstacles, namely taking walls, the podium, desks and chairs, and other users as obstacles on the user's travelling route, and distinguishing dynamic and static obstacles by using a convolutional neural network algorithm; predicting the motion trail of the dynamic obstacle by using a heuristic algorithm based on particle swarm optimization; scaling the raster image of the obstacle icon by adopting a nearest neighbor sampling algorithm, and marking it onto the eagle eye map;
(6) Predicting path optimization, namely predicting a walkable candidate path by using a path planning algorithm by taking the current position of a user as a starting point and combining the characteristic information of a three-dimensional map in a ladder classroom, static barriers and the positions of other teachers and students; dividing a three-dimensional map of a stair classroom into square grids on a horizontal plane according to the average step length of a user; correcting detour, redundant, duplicate and discontinuous paths using bicubic interpolation algorithms;
(7) Generating a guiding prompt, namely generating text clues of up steps, down steps, straight going, left turning and right turning by using a generative adversarial network algorithm; converting the text clues into navigation instructions by using a word frequency-inverse document frequency algorithm, sequentially adopting an acoustic feature generation network and a vocoder, and mapping the features into voice signals; and updating the pointer direction of the virtual compass according to the estimation result of the sight direction of the user.
The invention has the beneficial effects that:
A three-dimensional map of the teaching space is generated with computer vision and deep learning techniques; the user's line of sight is inferred from the user's field of view and located in space; the gaze object is segmented, the intersection between the user's line of sight and the segmented gaze object is detected, and the travel direction is estimated; a path is planned according to the static and dynamic obstacles in the teaching space and the predicted dynamic obstacle trajectories; and the user is guided through the teaching space with text clues, voice navigation and compass pointing prompts. Compared with the prior art, the invention predicts the travel direction more accurately from the user's field-of-view information combined with the motion trajectories of dynamic obstacles, produces travel routes that better match user expectations, reduces redundant route-guidance information, improves the user's spatial cognition and autonomous exploration abilities in complex environments, and supports efficient teaching activities.
Drawings
Fig. 1 is a schematic diagram of a path guidance system in a virtual-real fusion environment according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a 3D model of a teaching scene in an embodiment of the invention.
Fig. 3 is a schematic diagram of a ladder classroom scene element layout in an embodiment of the invention.
Fig. 4 is a schematic diagram of the positioning of a user space in a stair classroom according to an embodiment of the present invention, 401-a light source on a front wall, 402-a light source on a rear wall, 403-a light source on a left wall, 404-a light source on a right wall, 405-a light source on a top wall, and 406-the location of the user.
FIG. 5 is a schematic view of a 2D bird's eye view of a stair classroom in an embodiment of the invention, 501-classroom podium, 502-classroom desk and chair, 503-user's gaze area, 504-user's spatial location, 505-range indicator.
FIG. 6 is a diagram of a convolutional neural network architecture in an embodiment of the present invention: 601-an eye region image, 602-a convolutional layer with a convolution kernel of size 9, 603-a max pooling layer, 604-a convolutional layer with a convolution kernel of size 7, 605-an average pooling layer, 606-a fully connected layer, 607-straight-ahead line-of-sight direction, 608-upward line-of-sight direction, 609-downward line-of-sight direction, 610-left line-of-sight direction, 611-right line-of-sight direction, 612-obliquely upward line-of-sight direction, 613-obliquely downward line-of-sight direction.
Fig. 7 is a schematic view of a gaze segment of a virtual object within a user field of view in an embodiment of the present invention.
Fig. 8 is a schematic diagram of a three-dimensional compass in an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and embodiments, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
As shown in fig. 1, the present embodiment provides a path guidance system in a virtual-real fusion environment, which includes a teaching scene creation module, a user space positioning module, a user sight tracking module, a travel direction estimation module, an obstacle recognition module, a predicted path optimization module, and a guidance prompt generation module.
The working method of the path guiding system in the virtual-real fusion environment comprises the following steps:
(1) Teaching scene creation. A multi-view camera is used for shooting high-definition images of the ladder classrooms along the steps of the walkways of the ladder classrooms; generating walls, podium, steps, interactive electronic whiteboard and table and chair elements in the teaching scene by using a multi-view three-dimensional reconstruction algorithm; and (3) completing the layout of the table and the chair at the plane position of the ladder, and adjusting the position, the orientation and the gesture of each scene element in the ladder classroom by using a conjugate gradient algorithm.
(1-1) multi-view image acquisition of a teaching scene. The method comprises the steps of uniformly setting exposure time and shutter speed of a multi-view camera, photographing high-definition images of a ladder classroom by using the multi-view camera along a pavement step of the ladder classroom, estimating camera gestures of the high-definition images by using a multi-view geometric algorithm, and dividing the high-definition images into basic, local, oblique view, rotation and time sequence views according to the camera gestures.
(1-2) scene model generation. According to the parallel view angle, the overlapping view angle and the space-time relation between the multi-view images, a multi-view three-dimensional reconstruction algorithm is used for generating wall, podium, step, interactive electronic whiteboard and table and chair elements in a teaching scene, a Gaussian filter bank is adopted for acquiring directional, detail, color, luster and structural texture information of a model in the images, and a texture mapping algorithm is used for mapping the texture information to a 3D model, so that a scene model shown in figure 2 is generated.
(1-3) scene element layout, as shown in fig. 3. And calculating elevation and slope parameters of a ladder classroom in the view image by using an elevation difference algorithm, determining the height, width and length of each ladder, completing the layout of tables and chairs at the plane positions of the ladder, distributing and placing a podium and an interactive electronic whiteboard in the front end region of the classroom, and adjusting the positions, orientations and postures of all scene elements in the ladder classroom by using a conjugate gradient algorithm. The specific steps of adjusting the scene elements of the ladder classroom are as follows:
i: initializing the positions, orientations and attitudes of scene elements to be p, o and t;
II: an objective function is constructed using equation 1:
wherein p ', o ' and t ' respectively represent the adjusted position, orientation and gesture of the scene element;
III: the gradient calculation is shown in equation 2:
IV: updating the position, orientation and pose of the scene element using equation 3:
where α is the search step size.
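As an illustration of steps I to IV, the short Python sketch below applies the gradient-style update of equation 3 to a scene element's position, orientation and pose parameters. The quadratic objective and the fixed step size alpha are stand-in assumptions; the patent text does not specify the actual objective of equation 1 or the conjugate-gradient details.

```python
import numpy as np

def adjust_scene_element(params, target, alpha=0.05, iters=200):
    """Gradient-style adjustment of a scene element's (position, orientation, pose)
    vector. `params` and `target` are assumed flat 9-D vectors [p (3), o (3), t (3)];
    the quadratic objective below is only a stand-in for equation 1."""
    x = np.asarray(params, dtype=float)
    target = np.asarray(target, dtype=float)

    def objective(v):
        # Assumed objective: squared distance to the desired layout (equation 1 stand-in).
        return 0.5 * np.sum((v - target) ** 2)

    def gradient(v):
        # Analytic gradient of the assumed objective (equation 2 stand-in).
        return v - target

    for _ in range(iters):
        g = gradient(x)
        x = x - alpha * g          # update step (equation 3): x <- x - alpha * grad f
        if np.linalg.norm(g) < 1e-6:
            break
    return x, objective(x)

# Example: nudge a desk element toward its planned position/orientation/pose.
initial = [1.0, 0.2, 0.0,  0.0, 0.0, 0.3,  0.0, 0.1, 0.0]
planned = [1.5, 0.0, 0.0,  0.0, 0.0, 0.0,  0.0, 0.0, 0.0]
adjusted, cost = adjust_scene_element(initial, planned)
```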
(2) User space positioning. According to the displacement and direction of the user's movement, combined with the difference of the point cloud data before and after the movement, the point cloud 3D coordinates of each model are determined and a real-time three-dimensional map of the teaching space is generated; the position coordinates of the user in the three-dimensional map are estimated by using a front intersection algorithm; a 2D aerial view of the teaching space is generated by using an orthographic projection method, and desks, chairs, the platform and walkways in the teaching space are represented by different point, line and surface symbols.
(2-1) generating a three-dimensional map of the teaching space. When the user wears the head display to move in the teaching space, an ultra wideband communication (UWB) sensor arranged in the head display scans the teaching scene, deduces the layout of each model in the teaching scene before the user according to the received reflection wavelength, and determines the point cloud 3D coordinates of each model according to the displacement and the direction of the movement of the user and the difference value of the point cloud data at the position before and after the movement, so as to generate a real-time three-dimensional map of the teaching space.
(2-2) user location determination, as shown in fig. 4. The method comprises the steps of installing non-coplanar light sources on front, back, left, right and top wall surfaces of a ladder classroom, taking a user as a target object, capturing angle information of incident light received by the target object by using a photosensitive sensor, representing the position and the direction of a teaching space by adopting a spherical coordinate system, and estimating the position coordinates of the user in a three-dimensional map according to the angle information between the user and each light source by using a front intersection algorithm. Estimating user position coordinates, namely:
i: the light sources of the front, back, left, right and top wall surfaces of the ladder classroom shown in fig. 4 are respectively defined as A, B, C, D and E, five points are in a non-coplanar form, and the spatial position coordinates of the light sources are obtained as (X) A ,Y A ,Z A )、(X B ,Y B ,Z B )、(X C ,Y C ,Z C )、(X D ,Y D ,Z D ) And (X) E ,Y E ,Z E );
II: calculating the coordinate vector m of the image point corresponding to each light source by using the formula 4 A 、m B 、m C 、m D And m E
Wherein K, R and T are respectively the in-camera parameter matrix, the rotation matrix and the translation vector of the camera, (X) i ,Y i ,Z i ) Representing the spatial coordinates of the light source i= { a, B, C, D, E } in the stair classroom;
III: calculating the line-of-sight vector for each light source using equation 5And->
IV: the spatial coordinates of the front intersection are calculated as shown in equation 6:
wherein P is i The space coordinates of the front intersection point corresponding to the light source i are represented, and O represents the coordinates of the camera optical center in a world coordinate system;
v: estimating user position coordinates using equation 7:
wherein,a line vector representing light sources j, k= { a, B, C, D, E }, and j+.k;
VI: acquiring the current space position coordinate of the user as (X) u ,Y u ,Z u )。
(2-3) eagle eye map generation. Generating a 2D aerial view of the teaching space shown in fig. 5 by using an orthographic projection method, using different points, lines and surface symbols to represent a table chair, a platform and a walkway in the teaching space, marking the space position of a user on an eagle eye map by using a yellow circle, using a white conical surface to represent the sight line area of the user, and refreshing the positions of the circle and the conical surface according to the change of the position and the sight line of the user.
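A possible rendering of the eagle eye map: the teaching-space point cloud is orthographically projected onto the floor plane, and the user marker and gaze cone described above are overlaid. The matplotlib drawing below is only an assumed illustration; the symbol sizes, the 40-degree gaze wedge and the random stand-in point cloud are not taken from the patent.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Wedge

def draw_eagle_eye(points_xyz, user_xy, gaze_deg, ax=None):
    """Orthographic (top-down) projection of the teaching-space point cloud plus
    the user position (yellow circle) and gaze area (white wedge)."""
    if ax is None:
        ax = plt.gca()
    pts = np.asarray(points_xyz, dtype=float)
    ax.scatter(pts[:, 0], pts[:, 1], s=2, c="gray")            # drop Z: orthographic projection
    ax.add_patch(Circle(user_xy, 0.25, color="yellow", zorder=3))
    ax.add_patch(Wedge(user_xy, 2.0, gaze_deg - 20, gaze_deg + 20,
                       color="white", alpha=0.6, zorder=2))     # ~40 deg gaze cone (assumed)
    ax.set_aspect("equal")
    return ax

# Example: random stand-in points for desks and podium, user facing the front of the room.
rng = np.random.default_rng(0)
cloud = np.c_[rng.uniform(0, 8, 500), rng.uniform(0, 6, 500), rng.uniform(0, 1, 500)]
draw_eagle_eye(cloud, user_xy=(4.0, 1.0), gaze_deg=90)
plt.show()
```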
(3) User gaze tracking. Detecting the position and posture parameter change of the user in real time by adopting a head tracking algorithm, and updating the positions and postures of obstacles, traffic lines and other teacher and student users in the view field range of the user; extracting structural features of an eye region image of a user by using a convolutional neural network algorithm, and deducing the sight direction of the user; and determining the gaze point coordinates of the user by adopting a visual search algorithm, and determining the region where the gaze point is located as the region of interest.
(3-1) visual information acquisition. According to the head gesture and the sight direction of the user, a user interface self-adaptive algorithm is used for obtaining the range of the user visual field, a head tracking algorithm is used for detecting the position and gesture parameter change of the user in real time, steps on a travel route are recognized in real time, and the positions and gestures of obstacles, traffic lines and other teachers and students in the range of the user visual field are updated.
(3-2) user gaze inference. The camera built in the head display is used for tracking the sight line change of a user in the advancing process, the anti-shake algorithm based on local motion estimation is used for eliminating motion blur of the image, the target detection algorithm is used for identifying the eye region of the image, the convolutional neural network algorithm shown in fig. 6 is used for extracting the pupil, iris, cornea, eyelid and orbit characteristics of the eye region, and the sight line direction of the user is deduced. The specific steps of deducing the sight direction of the user are as follows:
i: defining the user's line of sight as straight ahead, above, below, left, right, obliquely above and obliquely below, and denoted as y 0 、y 1 、y 2 、y 3 、y 4 、y 5 And y 6
II: obtaining an eye region image X= [ X ] from top to bottom i,j ]Wherein x is ij Pixel points representing the i = {0,1,2,..n-1 } row and j = {0,1,2,..m-1 } column of the image;
III: pupil, iris, cornea, eyelid and orbital eye feature vectors of the eye region are extracted using equation 8:
wherein O is l Is the value of the first = {0,1,2,3,4,5,6} element in the feature vector, x l-i,l-j Pixel values representing the first-i row and first-j column of the input image, a i,j The weight value of the ith row and the jth column of the convolution kernel is represented, and b represents offset;
IV: using equation 9, infer a probability value for the user gaze direction category:
v: the ordered gaze direction category probability value is P (y 0 ),P(y 1 ),P(y 5 ),P(y 3 ),P(y 6 ),P(y 2 ) And P (y) 4 ) The user's gaze direction is estimated to be straight ahead.
(3-3) spatial line-of-sight positioning. And calculating the sight line direction, the focus distance and the sight line track attribute by using a support vector machine algorithm, determining the sight-line gaze point coordinate of the user by using a visual search algorithm, mapping the sight attention of the user to a teaching space by using a transformation matrix, dividing the teaching space into different areas according to the field of view of the user, and determining the area where the gaze point is located as an interest area.
(4) Travelling direction estimation. The surface patch set is analyzed with a least-squares fitting algorithm, the barycentric coordinates of each approximate triangular patch are extracted, and the barycenters are connected to generate a gazing line segment; a sight line vector is obtained from the user's pupil position and sight direction, the gazed-object line segment is represented with a parameterized equation, and the intersection point between the vector and the parameterized equation is calculated with a numerical method; a heading estimation algorithm is employed to infer the directions in which the user may travel within the field of view.
(4-1) line segmentation generation of the fixation object. Extracting point cloud data in the field of view of a user, determining the name of an object watched by the user in the point cloud data by using a virtual object searching algorithm, processing the object point cloud by using a gridding operation, generating a triangular grid patch set of each object, analyzing the patch set by using a least square fitting algorithm, extracting barycenter coordinates of each approximate triangular patch, and connecting each barycenter to generate a watched line segment shown in fig. 7. The method comprises the specific steps of generating a gazing line segment:
i: obtain the vertex coordinates of the i= {1,2,3,..n } approximate triangular patches as (x i1 ,y i1 ,z i1 ),(x i2 ,y i2 ,z i2 ),(x i3 ,y i3 ,z i3 );
II: the normal vector of the ith triangular mesh patch is calculated using equation 10:
wherein (a, b, c) is a normal vector of the plane;
III: calculating the barycentric coordinates of the triangle plane using equation 11:
IV: according to step III, the barycentric coordinates of two adjacent ith and j-th triangular patches are obtained as (x) ic ,y ic ,z ic ) And (x) jc ,y jc ,z jc );
V: connecting the barycentric coordinates to obtain a gaze segment as shown in equation 12:
(4-2) Detection of intersection between the line of sight and the gaze segment. The starting point and direction of the user's line of sight are expressed as a spatial vector, the gazed-object line segment is expressed with a parameterized equation, and a numerical method is used to calculate the intersection point between the vector and the parameterized equation; if an intersection point exists, an obstacle lies ahead and the user cannot travel along the line-of-sight direction; otherwise, the user can travel along the line-of-sight direction.
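Because a 3D sight ray almost never passes exactly through a line segment, the intersection test of (4-2) is typically carried out numerically by finding the closest approach between the ray and the parameterized segment and comparing it with a tolerance. The sketch below takes that approach; the tolerance value and the coarse parameter search are assumed implementation details.

```python
import numpy as np

def ray_hits_segment(origin, direction, seg_a, seg_b, tol=0.05):
    """Numerically test whether the sight ray origin + s*direction (s >= 0) passes
    within `tol` of the gaze segment seg_a + t*(seg_b - seg_a), 0 <= t <= 1.
    Returns (hit, closest_distance)."""
    o = np.asarray(origin, float); d = np.asarray(direction, float)
    a = np.asarray(seg_a, float);  e = np.asarray(seg_b, float) - a
    d = d / np.linalg.norm(d)

    best = np.inf
    # Coarse numerical search over the segment parameter t (a closed-form
    # closest-approach solution would also work; this keeps the sketch short).
    for t in np.linspace(0.0, 1.0, 200):
        p = a + t * e
        s = max(0.0, np.dot(p - o, d))      # closest point on the ray to p
        best = min(best, np.linalg.norm(p - (o + s * d)))
    return best <= tol, best

# Example: sight ray from the user's pupil position toward a gaze segment ahead.
hit, dist = ray_hits_segment(origin=(0, 0, 1.6), direction=(0, 1, -0.2),
                             seg_a=(-0.3, 3.0, 1.0), seg_b=(0.3, 3.0, 1.0))
print(hit, round(dist, 3))
```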
(4-3) Direction estimation. Gait detection and step length estimation algorithms are used to collect the user's travel speed, acceleration and direction of motion on the steps and on the flat floor in real time; according to the result of the line-of-sight and line-segment intersection detection, a heading estimation algorithm infers the directions in which the user can travel within the field of view, and if the user's field of view changes, new walkable candidate routes are estimated again.
(5) Obstacle identification. The walls, the podium, desks and chairs, and other users are used as obstacles on the travelling routes, and a convolutional neural network algorithm is adopted to distinguish dynamic and static obstacles; the motion trail of each dynamic obstacle is predicted with a heuristic algorithm based on particle swarm optimization; the raster image of the obstacle icon is scaled using a nearest neighbor sampling algorithm and marked onto the eagle eye map.
(5-1) obstacle classification. According to the types of scene elements on the three-dimensional map of the teaching space, walls, podium, tables, chairs and other user elements are used as barriers on a moving route, the outline shape of each barrier is obtained by adopting an edge detection and outline extraction algorithm, the classification of dynamic and static barriers is realized by adopting a convolutional neural network algorithm, and the motion direction, speed and acceleration attribute of the dynamic barrier are obtained in real time by using a motion model detection algorithm.
(5-2) dynamic obstacle trajectory prediction. According to the motion state of the dynamic obstacle, a uniform motion and uniform acceleration motion model is constructed, a heuristic algorithm based on particle swarm optimization is used for predicting the motion trail of the dynamic obstacle, a quasi-Newton algorithm is used for adjusting the trail parameters of curvature, flexibility and smoothness, the motion trail of the dynamic obstacle is smoothed, and a spatial position coordinate sequence of the trail is obtained.
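As one illustration of (5-2), the sketch below uses a small particle swarm to fit a uniform-velocity/uniform-acceleration motion model to the recent positions of a dynamic obstacle (for example, a walking student) and then extrapolates its trajectory. The swarm size, the inertia and attraction coefficients and the squared-error fitness are assumptions; the patent does not give these details.

```python
import numpy as np

def pso_fit_motion(times, positions, n_particles=40, iters=120, seed=0):
    """Fit x(t) = x0 + v*t + 0.5*a*t^2 (per axis) to observed obstacle positions by
    minimising the squared prediction error with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    t = np.asarray(times, float)[:, None]
    obs = np.asarray(positions, float)                 # shape (T, dims)
    dims = obs.shape[1]
    n_params = 3 * dims                                # x0, v, a per axis

    def cost(theta):
        x0, v, a = theta[:dims], theta[dims:2 * dims], theta[2 * dims:]
        pred = x0 + v * t + 0.5 * a * t ** 2
        return np.sum((pred - obs) ** 2)

    pos = rng.uniform(-2.0, 2.0, (n_particles, n_params))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()

    x0, v, a = gbest[:dims], gbest[dims:2 * dims], gbest[2 * dims:]
    return lambda tq: x0 + v * tq + 0.5 * a * tq ** 2   # trajectory predictor

# Example: observed 2D positions of a moving obstacle over one second, predicted ahead.
times = np.linspace(0.0, 1.0, 6)
walk = np.c_[0.5 * times, 1.0 * times + 0.1 * times ** 2]
predict = pso_fit_motion(times, walk)
print(predict(1.5))                                     # position ~0.5 s into the future
```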
(5-3) Obstacle marking. According to the outline shape of each obstacle, the obstacle outline is filled with red and the generated filled image is used as the obstacle symbol; the icon's raster image is scaled with a nearest neighbor sampling algorithm and marked onto the eagle eye map in equal proportion, and the motion trail prediction result of the dynamic obstacle is represented by a highlighted blue dashed line segment.
(6) Predicted path optimization. A walkable candidate path is predicted with a path planning algorithm, taking the current position of the user as the starting point and combining the feature information of the three-dimensional map of the ladder classroom, the static obstacles and the positions of other teachers and students; the three-dimensional map of the ladder classroom is divided into square grids on the horizontal plane according to the user's average step length; detour, redundant, duplicate and discontinuous paths are corrected using a bicubic interpolation algorithm.
(6-1) Path prediction. The height, width and length information of each step in the ladder classroom within the user's view range is calculated in real time by using an elevation interpolation algorithm, and a walkable candidate path is predicted by using a path planning algorithm according to the end position set by the user on the eagle eye map, taking the current position of the user as the starting point, and combining the feature information of the three-dimensional map of the ladder classroom and the positions of static obstacles and other teachers and students.
(6-2) meshing of paths. Dividing a three-dimensional map of a stair classroom into square grids on a horizontal plane according to the average step length of a user, setting the grids projected by dynamic and static obstacles as non-passable marks, calculating the center points of the grids, constructing the intersection points of all candidate paths and the center points into a new discrete point set, and screening whether the paths can pass or not according to whether the grid units passing by the two adjacent points are occupied by the obstacles or not. The specific steps of calculating the grid center point are as follows:
i: the number of grids was calculated using equation 13:
wherein,representing an upward rounding function, wherein L is S, and the candidate path length and the grid area are respectively;
II: the position coordinates of the mesh on the candidate path are calculated as shown in equation 14:
x i ,y i ,z i =x 0 +i·S·cos(θ),y 0 +i·S·sin(θ),z 0 +i.S.tan (θ) (equation 14)
Wherein, (x) i ,y i ,z i ) Represents the spatial coordinates of the i = {1,2,3,..n } grid, (x 0 ,y 0 ,z 0 ) And θ represents the start point coordinates and the direction angle of the path segment, respectively;
III: calculating the ith grid center point coordinates using equation 15:
(6-3) Path optimization. And eliminating candidate paths with non-passable grid units, calculating an evaluation value of the paths by adopting a support vector machine algorithm, obtaining a path with the minimum evaluation value, correcting detour, redundancy, repetition and discontinuous paths by using a bicubic interpolation algorithm, and fitting and smoothing the corrected candidate paths by using a reinforcement learning algorithm based on a value function.
(7) Guide prompt generation. Text clues of up steps, down steps, straight going, left turning and right turning are generated by using a generative adversarial network algorithm; the text clues are converted into navigation instructions by using a word frequency-inverse document frequency algorithm, and an acoustic feature generation network and a vocoder are sequentially adopted to map the features into voice signals; and the pointer direction of the virtual compass is updated according to the estimation result of the user's sight direction.
(7-1) Text cue generation. The discrete path point set is taken as the input sequence, a sequence conversion model combined with an attention mechanism generates the position codes of all path points, a bidirectional recurrent neural network algorithm extracts the position, distance and direction features in the position codes, and a generative adversarial network algorithm generates the text clues of up steps, down steps, straight going, left turning and right turning. The specific steps of text cue generation are as follows:
I: Define the position, distance and direction features in the position code as x_loc, x_dis and x_dir;
II: Splice the features into the input feature vector x = {x_loc, x_dis, x_dir};
III: Calculate the noise vector using equation 16:
z = g(W·x + b) (equation 16)
wherein W and b are the weight and threshold, and g denotes the normalized exponential function;
IV: Calculate the word vector using equation 17:
V = f((W_in + W_res + W_back)·z) (equation 17)
wherein W_in, W_res and W_back are respectively the input connection weights, internal connection weights and feedback connection weights, and f is the state activation function;
V: Convert the word vector into the up steps, down steps, straight going, left turning and right turning terms using equation 18:
word = v2w(V) (equation 18)
wherein v2w is the inverted word-vector matrix of the pre-trained word2vec model.
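A toy numpy sketch of steps III to V: the spliced feature vector is passed through a normalized-exponential layer to produce the noise vector z, combined with the three weight matrices to give the word vector V, and V is then mapped to the nearest term in a small word-vector table. All weights and the five-term table here are random stand-ins; in the patent the table comes from a pre-trained word2vec model, and the choice of tanh for f is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
TERMS = ["up steps", "down steps", "straight", "turn left", "turn right"]

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()                                   # g: normalized exponential

def generate_term(x, W, b, W_in, W_res, W_back, word_vectors):
    z = softmax(W @ x + b)                               # equation 16: z = g(Wx + b)
    V = np.tanh((W_in + W_res + W_back) @ z)             # equation 17 (f assumed to be tanh)
    sims = word_vectors @ V                              # equation 18: nearest word vector
    return TERMS[int(np.argmax(sims))]

# Random stand-in parameters: 6-D input features, 8-D hidden state, 8-D word vectors.
x = np.concatenate([rng.normal(size=2),                  # x_loc
                    rng.normal(size=2),                  # x_dis
                    rng.normal(size=2)])                 # x_dir
W, b = rng.normal(size=(8, 6)), rng.normal(size=8)
W_in, W_res, W_back = (rng.normal(size=(8, 8)) for _ in range(3))
word_vectors = rng.normal(size=(len(TERMS), 8))          # stand-in for the word2vec table
print(generate_term(x, W, b, W_in, W_res, W_back, word_vectors))
```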
(7-2) Voice path guidance. The text clues are converted into navigation instructions by using a word frequency-inverse document frequency algorithm; a speech synthesis algorithm adds prosody, beat, melody, harmony, timbre, rhythm and tone characteristics to the instructions; and an acoustic feature generation network and a vocoder are sequentially adopted, with filtering, noise reduction and gain operations, to map the features into voice signals.
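One way to realize the term frequency-inverse document frequency conversion of (7-2) is to vectorize a small library of navigation-instruction templates with TF-IDF and select the template most similar to a generated text cue before passing it to the speech pipeline. The template wording and the similarity threshold below are assumptions for illustration; scikit-learn's TfidfVectorizer is used only as one convenient implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical instruction templates keyed to the text cues generated in (7-1).
TEMPLATES = [
    "walk straight ahead along the aisle",
    "turn left at the next desk row",
    "turn right toward the podium",
    "go up the steps carefully",
    "go down the steps carefully",
]

vectorizer = TfidfVectorizer()
template_matrix = vectorizer.fit_transform(TEMPLATES)

def cue_to_instruction(cue, threshold=0.1):
    """Map a generated text cue to the most similar instruction template by
    TF-IDF cosine similarity; return None if nothing is similar enough."""
    sims = cosine_similarity(vectorizer.transform([cue]), template_matrix)[0]
    best = sims.argmax()
    return TEMPLATES[best] if sims[best] >= threshold else None

print(cue_to_instruction("up steps"))        # -> "go up the steps carefully"
print(cue_to_instruction("turn left"))       # -> "turn left at the next desk row"
```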
(7-3) Visual path guidance. Using the virtual compass shown in fig. 8, with light blue representing the compass pointer, the pointer direction of the virtual compass is updated according to the estimate of the user's sight direction combined with the spatial position in the ladder classroom; if the user's sight changes, the pointer angle is updated in real time.
What is not described in detail in this specification is prior art known to those skilled in the art.
It will be readily appreciated by those skilled in the art that the foregoing description is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents and improvements made within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (9)

1. A path guidance system in a virtual-real fusion environment, characterized in that: the system comprises a teaching scene creation module, a user space positioning module, a user sight tracking module, a traveling direction estimation module, an obstacle recognition module, a predicted path optimization module and a guiding prompt generation module;
the teaching scene creation module is used for shooting a multi-view image, reconstructing a three-dimensional teaching scene, and arranging and adjusting parameters of each scene element;
the user space positioning module is used for collecting point cloud coordinates of the model, generating a three-dimensional map of the teaching space and determining the space position of a user;
the user sight tracking module is used for tracking the head change of the user, updating the information in the view field, deducing the sight direction of the user and determining the sight-line point;
the advancing direction estimating module is used for calculating the intersection point between the sight line vector and the attention line segment and deducing the advancing direction of the user in the view field;
the obstacle recognition module is used for distinguishing dynamic obstacles from static obstacles, predicting the motion trail of the dynamic obstacles, and marking obstacle icons on a 2D eagle eye pattern;
the prediction path optimization module is used for predicting a walkable candidate path, gridding a three-dimensional map of a ladder classroom and correcting detour, redundancy, repetition and discontinuous paths;
the guiding prompt generation module is used for converting the text clues into navigation instructions and updating the pointer direction of the virtual compass according to the sight direction of the user.
2. A method of operating a path guidance system in a virtual-real fusion environment as claimed in claim 1, characterized in that the method comprises the steps of:
(1) Creating a teaching scene, and shooting high-definition images of the ladder classrooms by using a plurality of cameras along the steps of the walkways of the ladder classrooms; generating walls, podium, steps, interactive electronic whiteboard and table and chair elements in the teaching scene by using a multi-view three-dimensional reconstruction algorithm; the layout of the table and the chair is completed at the step plane position, and the position, the orientation and the gesture of each scene element in the step classroom are adjusted by using a conjugate gradient algorithm;
(2) The method comprises the steps of positioning a user space, determining the point cloud 3D coordinates of each model according to the displacement and the direction of the movement of the user and combining the difference value of the point cloud data of the positions before and after the movement, and generating a real-time three-dimensional map of the teaching space; estimating the position coordinates of the user in the three-dimensional map by using a front intersection algorithm; generating a 2D aerial view of the teaching space by using an orthographic projection method, and representing a desk, a chair, a platform and a walkway in the teaching space by using different point, line and surface symbols;
(3) The user sight tracking, the head tracking algorithm is adopted to detect the position and posture parameter change of the user in real time, and the positions and postures of obstacles, traffic lines and other teacher and student users in the view field range of the user are updated; extracting structural features of an eye region image of a user by using a convolutional neural network algorithm, and deducing the sight direction of the user; a visual search algorithm is adopted to determine the gaze point coordinates of the user, and the region where the gaze point is located is determined to be an interest region;
(4) Estimating the travelling direction, analyzing a surface patch set by using a least square fitting algorithm, extracting barycenter coordinates of each approximate triangular surface patch, and connecting each barycenter to generate a gazing line segment; obtaining a sight line vector according to the pupil position and the sight line direction of a user, using a parameterized equation to represent a line segment of a fixation object, and calculating an intersection point between the vector and the parameterized equation by using a numerical method; deducing the direction in which the user can travel in the field of view by adopting a course estimation algorithm;
(5) Identifying obstacles, namely taking walls, the podium, desks and chairs, and other users as obstacles on the user's travelling route, and distinguishing dynamic and static obstacles by using a convolutional neural network algorithm; predicting the motion trail of the dynamic obstacle by using a heuristic algorithm based on particle swarm optimization; scaling the raster image of the obstacle icon by adopting a nearest neighbor sampling algorithm, and marking it onto the eagle eye map;
(6) Predicting path optimization, namely predicting a walkable candidate path by using a path planning algorithm by taking the current position of a user as a starting point and combining the characteristic information of a three-dimensional map in a ladder classroom, static barriers and the positions of other teachers and students; dividing a three-dimensional map of a stair classroom into square grids on a horizontal plane according to the average step length of a user; correcting detour, redundant, duplicate and discontinuous paths using bicubic interpolation algorithms;
(7) Generating a guiding prompt, namely generating text clues of up steps, down steps, straight going, left turning and right turning by using a generative adversarial network algorithm; converting the text clues into navigation instructions by using a word frequency-inverse document frequency algorithm, sequentially adopting an acoustic feature generation network and a vocoder, and mapping the features into voice signals; and updating the pointer direction of the virtual compass according to the estimation result of the sight direction of the user.
3. The working method of the path guidance system in the virtual-real fusion environment according to claim 2, wherein the teaching scene creation in the step (1) specifically includes:
(1-1) multi-view image acquisition of a teaching scene, uniformly setting exposure time and shutter speed of a multi-view camera, shooting high-definition images of a ladder classroom by using the multi-view camera along a pavement step of the ladder classroom, estimating camera gestures of the high-definition images by using a multi-view geometric algorithm, and dividing the high-definition images into basic, local, oblique view, rotation and time sequence views according to the camera gestures; (1-2) generating a scene model, namely generating classroom, podium, step, interactive electronic whiteboard and table and chair elements in a teaching scene by using a multi-view three-dimensional reconstruction algorithm according to parallel view angles, overlapping view angles and space-time relations between multi-view images, acquiring directivity, detail, color, luster and structural texture information of the model in the images by using a Gaussian filter bank, and mapping the texture information to a 3D model by using a texture mapping algorithm;
(1-3) scene element layout, namely calculating the elevation and gradient parameters of a ladder classroom in a view image by using an elevation difference algorithm, determining the height, width and length of each ladder, completing the layout of tables and chairs at the plane positions of the ladder, distributing and placing podium and interactive electronic whiteboard in the front end region of the classroom, and adjusting the position, orientation and gesture of each scene element in the ladder classroom by using a conjugate gradient algorithm.
4. The method for operating a path guidance system in a virtual-real fusion environment according to claim 2, wherein the user space positioning in step (2) specifically includes:
(2-1) generating a three-dimensional map of a teaching space, wherein when a user wears a head display to move in the teaching space, an ultra-wideband communication sensor is arranged in the head display to scan the teaching scene, the layout of each model in the teaching scene before the user is deduced according to the received reflection wavelength, and the 3D coordinates of point clouds of each model are determined according to the displacement and the direction of the movement of the user and the difference value of point clouds before and after the movement, so as to generate a real-time three-dimensional map of the teaching space;
(2-2) determining the position of a user, installing non-coplanar light sources on the front, back, left, right and top wall surfaces of a ladder classroom, capturing angle information of incident light received by the target object by using a photosensitive sensor, representing the position and the direction of a teaching space by adopting a spherical coordinate system, and estimating the position coordinates of the user in a three-dimensional map by using a front intersection algorithm according to the angle information between the user and each light source;
(2-3) generating an eagle eye map, generating a 2D aerial view of the teaching space by using an orthographic projection method, using different points, lines and surface symbols to represent tables, chairs, podium and walkways in the teaching space, marking the space position of a user on the eagle eye map by using a yellow circle, using a white conical surface to represent the sight line area of the user, and refreshing the positions of the circle and the conical surface according to the change of the position and the sight line of the user.
5. The method for operating a path guidance system in a virtual-real fusion environment according to claim 2, wherein the user gaze tracking in step (3) specifically comprises:
(3-1) visual information acquisition, namely acquiring the range of a user visual field by using a user interface self-adaptive algorithm according to the head gesture and the sight direction of the user, detecting the position and gesture parameter change of the user in real time by using a head tracking algorithm, identifying steps on a travelling route in real time, and updating the positions and gestures of obstacles, passing lines and other teacher and student users in the range of the user visual field;
(3-2) estimating the sight of the user, tracking the sight change of the user in the travelling process by using a camera arranged in a head display, eliminating the motion blur of the image by using an anti-shake algorithm based on local motion estimation, identifying the eye region of the image by using a target detection algorithm, extracting the pupil, iris, cornea, eyelid and orbit characteristics of the eye region by using a convolutional neural network algorithm, and estimating the sight direction of the user;
(3-3) space sight positioning, namely calculating the sight direction, the focus distance and the sight track attribute by using a support vector machine algorithm, determining the sight-point coordinates of the user by using a visual search algorithm, mapping the sight attention of the user to a teaching space by using a transformation matrix, dividing the teaching space into different areas according to the field of view range of the user, and determining the area where the sight point is located as an interest area.
6. The method for operating a path guidance system in a virtual-real fusion environment according to claim 2, wherein the estimating the traveling direction in step (4) specifically includes:
(4-1) generating line segments of the fixation objects, extracting point cloud data in the field of view of a user, determining names of the fixation objects of the user in the point cloud data by using a virtual object searching algorithm, processing the object point clouds by using gridding operation, generating triangular grid patch sets of all the objects, analyzing the patch sets by using a least square fitting algorithm, extracting barycenter coordinates of all the triangular patches, and connecting all barycenters to generate the fixation line segments;
(4-2) detecting intersection of the sight line and the line segment, obtaining a sight line vector according to the pupil position and the sight line direction of the user, using a parameterized equation to represent the line segment of the fixation object, and using a numerical method to calculate an intersection point between the vector and the parameterized equation; if an intersection point exists, an obstacle object exists in front and the user cannot travel along the sight line direction; otherwise, the user can travel along the sight line direction;
(4-3) estimating the direction, namely acquiring the running speed, acceleration and direction motion state of the user on the steps and the plane in real time by using gait detection and step length estimation algorithms, deducing the running direction of the user in the view field range by using course estimation algorithms according to the detection result of the intersection of the line of sight and the line segment, and re-estimating a new running candidate route if the view field range of the user changes.
7. The method for operating a path guidance system in a virtual-real fusion environment according to claim 2, wherein the obstacle recognition in step (5) specifically includes:
(5-1) classifying the obstacles, namely according to the types of scene elements on a three-dimensional map of a teaching space, using a wall, a podium, a table and a chair and other users as the obstacles on a moving route, adopting an edge detection and contour extraction algorithm to obtain the contour shape of each obstacle, adopting a convolutional neural network algorithm to realize the classification of dynamic and static obstacles, and using a motion model detection algorithm to obtain the motion direction, speed and acceleration attribute of the dynamic obstacle in real time;
(5-2) predicting a dynamic obstacle track, constructing a uniform motion and uniform acceleration motion model according to the motion state of the dynamic obstacle, predicting the motion track of the dynamic obstacle by using a heuristic algorithm based on particle swarm optimization, regulating curvature, deflection and smoothness track parameters by using a quasi-Newton algorithm, smoothing the motion track of the dynamic obstacle, and obtaining a space position coordinate sequence of the track; (5-3) marking the obstacle, namely filling the obstacle with red according to the outline shape of the obstacle, taking the generated filling image as a symbol of the obstacle, scaling the grid image of the icon by using a nearest neighbor sampling algorithm, marking the icon to an eagle eye pattern, and representing a motion trail prediction result of the dynamic obstacle by using a highlighted blue dotted line segment.
8. The method for operating a path guidance system in a virtual-real fusion environment according to claim 2, wherein the predicting path optimization in step (6) specifically includes:
(6-1) predicting a path, namely calculating the height, width and length information of each step in the ladder classroom within the user's view range in real time by using an elevation interpolation algorithm, and predicting walkable candidate paths by using a path planning algorithm according to the end position set by the user on the eagle eye map, taking the current position of the user as the starting point, and combining the feature information of the three-dimensional map of the ladder classroom and the positions of static obstacles and other teacher and student users;
(6-2) meshing subdivision of paths, namely dividing a three-dimensional map of a ladder classroom into square grids on a horizontal plane according to average step length of a user, setting grids projected by dynamic and static obstacles as non-passable marks, calculating grid center points, constructing a new discrete point set by intersections of all candidate paths and the center points, and screening whether the paths can pass according to whether grid units passed by adjacent two points are occupied by the obstacles or not;
(6-3) path optimization, namely eliminating candidate paths that contain impassable cells, computing an evaluation value for each remaining path with a support vector machine algorithm and selecting the path with the minimum evaluation value, correcting detours, redundant, repeated and discontinuous sections with a bicubic interpolation algorithm, and fitting and smoothing the corrected candidate path with a value-function-based reinforcement learning algorithm.
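Sub-step (6-2) rasterizes the classroom floor into step-length-sized cells and screens candidate paths against obstacle-occupied cells. The sketch below assumes axis-aligned obstacle footprints and simple point sampling along each path segment; the cell-center point set and the SVM evaluation of sub-step (6-3) are omitted, so this illustrates the screening idea rather than the claimed procedure.

```python
# Occupancy-grid sketch of the grid subdivision and passability screening in sub-step (6-2).
import numpy as np

def build_occupancy(width_m, depth_m, cell_m, obstacle_boxes):
    """Rasterize axis-aligned obstacle footprints (xmin, ymin, xmax, ymax) into a boolean grid."""
    grid = np.zeros((int(np.ceil(depth_m / cell_m)), int(np.ceil(width_m / cell_m))), bool)
    for xmin, ymin, xmax, ymax in obstacle_boxes:
        i0, i1 = int(ymin // cell_m), int(np.ceil(ymax / cell_m))
        j0, j1 = int(xmin // cell_m), int(np.ceil(xmax / cell_m))
        grid[i0:i1, j0:j1] = True
    return grid

def path_is_passable(path_points, grid, cell_m, samples_per_segment=20):
    """Check the cells touched between adjacent path points for obstacle occupancy."""
    for (x0, y0), (x1, y1) in zip(path_points, path_points[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_segment):
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            i, j = int(y // cell_m), int(x // cell_m)
            if grid[i, j]:
                return False
    return True

# Usage: a 10 m x 6 m floor, 0.6 m cells, one desk blocking the direct route.
occupancy = build_occupancy(10.0, 6.0, 0.6, obstacle_boxes=[(4.0, 2.0, 6.0, 4.0)])
print(path_is_passable([(0.5, 0.5), (9.0, 5.0)], occupancy, 0.6))               # blocked -> False
print(path_is_passable([(0.5, 0.5), (9.0, 0.5), (9.0, 5.0)], occupancy, 0.6))   # detour  -> True
```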
9. The method for operating a path guidance system in a virtual-real fusion environment according to claim 2, wherein the guidance prompt generation in step (7) specifically includes:
(7-1) text cue generation, namely using the discrete path point set as the input sequence, generating positional encodings of all path points with a sequence transformation model combined with an attention mechanism, extracting position, distance and direction features from the encodings with a bidirectional recurrent neural network algorithm, and generating text cues such as "up the steps", "down the steps", "go straight", "turn left" and "turn right" with an adversarial network algorithm;
(7-2) voice path guidance, namely converting the text cues into navigation instructions with a term frequency-inverse document frequency (TF-IDF) algorithm, adding rhythm, beat, melody, harmony, timbre, prosody and pitch characteristics to the instructions with a speech synthesis algorithm, performing filtering, noise reduction and gain processing sequentially with an acoustic-feature generation network and a vocoder, and mapping the features into a speech signal;
(7-3) visual path guidance, namely using a virtual compass whose pointer is rendered in blue, updating the pointer direction of the virtual compass according to the estimate of the user's gaze direction and the user's spatial position in the tiered classroom, and updating the pointer angle in real time if the user's gaze changes (a sketch of this pointer update follows this claim).
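The virtual compass of sub-step (7-3) essentially shows the signed angle between the user's gaze direction and the direction to the next waypoint of the guided path. The sketch below assumes a 2D floor-plane representation and an angle wrapped to [-180°, 180°); the function name and conventions are illustrative.

```python
# Sketch of the compass pointer update in sub-step (7-3): recompute the signed angle
# between the gaze direction and the direction to the next waypoint whenever the gaze changes.
import math

def compass_pointer_angle(user_pos, gaze_dir, next_waypoint):
    """Signed angle (degrees) the compass pointer should show, relative to the gaze."""
    to_target = (next_waypoint[0] - user_pos[0], next_waypoint[1] - user_pos[1])
    target_bearing = math.atan2(to_target[1], to_target[0])
    gaze_bearing = math.atan2(gaze_dir[1], gaze_dir[0])
    angle = math.degrees(target_bearing - gaze_bearing)
    return (angle + 180.0) % 360.0 - 180.0      # wrap into [-180, 180)

# Usage: user looks along +x, next waypoint is up and to the left -> pointer turns 45 degrees left.
print(compass_pointer_angle(user_pos=(0, 0), gaze_dir=(1, 0), next_waypoint=(1, 1)))  # 45.0
```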
CN202311497689.9A 2023-11-11 2023-11-11 Working method of path guiding system in virtual-real fusion environment Active CN117475115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311497689.9A CN117475115B (en) 2023-11-11 2023-11-11 Working method of path guiding system in virtual-real fusion environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311497689.9A CN117475115B (en) 2023-11-11 2023-11-11 Working method of path guiding system in virtual-real fusion environment

Publications (2)

Publication Number Publication Date
CN117475115A true CN117475115A (en) 2024-01-30
CN117475115B (en) 2024-06-21

Family

ID=89634575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311497689.9A Active CN117475115B (en) 2023-11-11 2023-11-11 Working method of path guiding system in virtual-real fusion environment

Country Status (1)

Country Link
CN (1) CN117475115B (en)

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RO200600557A8 (en) * 2006-07-11 2015-01-30 Vistrian Mătieş Portable laboratory for mechatronic education
US20120099804A1 (en) * 2010-10-26 2012-04-26 3Ditize Sl Generating Three-Dimensional Virtual Tours From Two-Dimensional Images
US10353532B1 (en) * 2014-12-18 2019-07-16 Leap Motion, Inc. User interface for integrated gestural interaction and multi-user collaboration in immersive virtual reality environments
CN108073432A (en) * 2016-11-07 2018-05-25 亮风台(上海)信息科技有限公司 A kind of method for displaying user interface of head-mounted display apparatus
CN107316344A (en) * 2017-05-18 2017-11-03 深圳市佳创视讯技术股份有限公司 A kind of method that Roam Path is planned in virtual reality fusion scene
CN109724610A (en) * 2018-12-29 2019-05-07 河北德冠隆电子科技有限公司 A kind of method and device of full information real scene navigation
CN110110389A (en) * 2019-04-03 2019-08-09 河南城建学院 A kind of indoor and outdoor evacuation emulation method that actual situation combines
CN111400963A (en) * 2020-03-04 2020-07-10 山东师范大学 Crowd evacuation simulation method and system based on chicken swarm algorithm and social force model
CN114034308A (en) * 2020-08-03 2022-02-11 南京翱翔信息物理融合创新研究院有限公司 Navigation map construction method based on virtual-real fusion
CN112230772A (en) * 2020-10-14 2021-01-15 华中师范大学 Virtual-actual fused teaching aid automatic generation method
CN112540673A (en) * 2020-12-09 2021-03-23 吉林建筑大学 Virtual environment interaction method and equipment
CN113269832A (en) * 2021-05-31 2021-08-17 长春工程学院 Electric power operation augmented reality navigation system and method for extreme weather environment
CN113340294A (en) * 2021-06-02 2021-09-03 南京师范大学 Landmark-fused AR indoor map navigation method
CN113837059A (en) * 2021-09-22 2021-12-24 哈尔滨工程大学 Patrol vehicle for advising pedestrians to wear mask in time and control method thereof
CN113672097A (en) * 2021-10-22 2021-11-19 华中师范大学 Teacher hand perception interaction method in three-dimensional comprehensive teaching field
CN114038254A (en) * 2021-11-01 2022-02-11 华中师范大学 Virtual reality teaching method and system
CN115470707A (en) * 2022-09-22 2022-12-13 上海时氪信息技术有限公司 City scene simulation system
CN115985122A (en) * 2022-10-31 2023-04-18 内蒙古智能煤炭有限责任公司 Unmanned system sensing method
CN116576857A (en) * 2023-04-19 2023-08-11 东北大学 Multi-obstacle prediction navigation obstacle avoidance method based on single-line laser radar
CN116188700A (en) * 2023-04-27 2023-05-30 南京大圣云未来科技有限公司 System for automatically generating 3D scene based on AIGC
CN116678394A (en) * 2023-05-10 2023-09-01 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Real-time dynamic intelligent path planning method and system based on multi-sensor information fusion
CN116644666A (en) * 2023-06-02 2023-08-25 西安理工大学 Virtual assembly path planning guiding method based on strategy gradient optimization algorithm
CN116881426A (en) * 2023-08-30 2023-10-13 环球数科集团有限公司 AIGC-based self-explanatory question-answering system

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
VUTHEA CHHEANG: "Group WiM: A Group Navigation Technique for Collaborative Virtual Reality Environments", 《2022 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS (VRW)》, 20 April 2022 (2022-04-20) *
ZHANG JIAYU: "Design of VR Engine Assembly Teaching System", 《ARXIV》, 11 July 2022 (2022-07-11) *
ZHENG ZHONG: "The Representation of Virtual Cultural Heritage Based on Key Events", 《2010 ASIA-PACIFIC CONFERENCE ON WEARABLE COMPUTING SYSTEMS》, 7 June 2010 (2010-06-07) *
百度文库 (Baidu Wenku): "Design and Implementation of a Virtual-Real Fusion Learning Model Scheme", 《HTTPS://WENKU.BAIDU.COM/VIEW/159E9690ADAAD1F34693DAEF5EF7BA0D4A736D9D.HTML?_WKTS_=1715218263527&BDQUERY=%E8%99%9A%E5%AE%9E+%E8%9E%8D%E5%90%88+%E8%B7%AF%E5%BE%84%E5%BC%95%E5%AF%BC+%E6%95%99%E5%AD%A6+%E8%A1%8C%E8%BF%9B+%E6%96%B9%E5%90%91%E4%BC%B0%E7%A, 22 October 2023 (2023-10-22) *
矩道VR虚拟现实课堂 (Judao VR Virtual Reality Classroom): "Wide Application of Virtual-Real Fusion Teaching Scenarios", 《HTTPS://BAIJIAHAO.BAIDU.COM/S?ID=1780605903465371377&WFR=SPIDER&FOR=PC》, 24 October 2023 (2023-10-24) *
贾芳 (JIA FANG): "Research on Design Strategies for Indoor Public Activity Spaces in Middle School Teaching Buildings from the Perspective of Perceptual Phenomenology", 《China Masters' Theses Full-text Database》, 15 January 2022 (2022-01-15) *
钟正 (ZHONG ZHENG): "Design Strategies and Case Implementation of Experiential Learning Environments Based on VR Technology", 《China Educational Technology》, 28 February 2018 (2018-02-28) *
钟正 (ZHONG ZHENG): "Application of Virtual Panorama Technology in the Development of a Field Practice Teaching Platform", 《Urban Geotechnical Investigation & Surveying》, 30 June 2017 (2017-06-30) *

Also Published As

Publication number Publication date
CN117475115B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
US11328158B2 (en) Visual-inertial positional awareness for autonomous and non-autonomous tracking
WO2022121645A1 (en) Method for generating sense of reality of virtual object in teaching scene
CN113096252B (en) Multi-movement mechanism fusion method in hybrid enhanced teaching scene
US10410328B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous device
KR101650799B1 (en) Method for the real-time-capable, computer-assisted analysis of an image sequence containing a variable pose
Duh et al. V-eye: A vision-based navigation system for the visually impaired
CN106840148A (en) Wearable positioning and path guide method based on binocular camera under outdoor work environment
CN107357427A (en) A kind of gesture identification control method for virtual reality device
CN104115192A (en) Improvements in or relating to three dimensional close interactions
CN110033411A (en) The efficient joining method of highway construction scene panoramic picture based on unmanned plane
CN106871906A (en) A kind of blind man navigation method, device and terminal device
CN109000655A (en) Robot bionic indoor positioning air navigation aid
CN109164802A (en) A kind of robot maze traveling method, device and robot
CN115933868A (en) Three-dimensional comprehensive teaching field system of turnover platform and working method thereof
Liu et al. A novel trail detection and scene understanding framework for a quadrotor UAV with monocular vision
Yao et al. Rca: Ride comfort-aware visual navigation via self-supervised learning
CN117475115B (en) Working method of path guiding system in virtual-real fusion environment
Gokl et al. Towards urban environment familiarity prediction
Muhlbauer et al. Navigation through urban environments by visual perception and interaction
CN115690343A (en) Robot laser radar scanning and mapping method based on visual following
CN116385757B (en) Visual language navigation system and method based on VR equipment
Kim et al. Improving Gaze Tracking in Large Screens With Symmetric Gaze Angle Amplification and Optimization Technique
WO2022071315A1 (en) Autonomous moving body control device, autonomous moving body control method, and program
Zhu et al. Computer-Aided Hierarchical Interactive Gesture Modeling Under Virtual Reality Environment
Lee et al. CityNav: Language-Goal Aerial Navigation Dataset with Geographic Information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant