CN114020978A - Park digital roaming display method and system based on multi-source information fusion - Google Patents

Park digital roaming display method and system based on multi-source information fusion

Info

Publication number
CN114020978A
Authority
CN
China
Prior art keywords
user
virtual
interaction
park
gloves
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111146316.8A
Other languages
Chinese (zh)
Other versions
CN114020978B (en)
Inventor
赵鹏飞
王维
韩沫
刘海
张权
赵怡梦
魏一博
刘行易
秦烽铭
胡明康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Research Center of Information Technology of Beijing Academy of Agriculture and Forestry Sciences
Original Assignee
Research Center of Information Technology of Beijing Academy of Agriculture and Forestry Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Research Center of Information Technology of Beijing Academy of Agriculture and Forestry Sciences
Priority to CN202111146316.8A
Publication of CN114020978A
Application granted
Publication of CN114020978B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/904Browsing; Visualisation therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a park digital roaming display method and system based on multi-source information fusion. The method comprises: tracking the user's lower limbs through sensors in an omnidirectional treadmill, determining the user's corresponding movement speed and direction, and determining the user's movement state in the virtual scene of the three-dimensional park model according to that speed and direction; and tracking the user's upper limbs through VR gloves worn by the user, so that when the user's operation on an interactive object through the VR gloves is detected as a touch, the touch effect is displayed according to preset interaction information, or, when the operation is detected as a grab, the corresponding virtual farming operation is performed on the interactive object according to the user's operation. The method lets the user roam and interact in the virtual park as if physically present, which stimulates the user's interest in roaming, increases user participation, creates a good user experience, and helps the user better understand the development and operation of the park.

Description

Park digital roaming display method and system based on multi-source information fusion
Technical Field
The invention relates to the field of virtual reality, in particular to a park digital roaming display method and system based on multi-source information fusion.
Background
Virtual Reality (VR) is a comprehensive technology that integrates three-dimensional tracking, pattern recognition, multimedia and other techniques, and has three prominent characteristics: immersion, interactivity and imagination. VR uses a computer to generate a lifelike, interactive virtual three-dimensional environment, and with special equipment such as headsets, data gloves, joysticks and sensors, the user can obtain the virtual experience of being personally on the scene.
For the digital display of an agricultural park, VR technology is used together with external devices such as an HTC head-mounted display, control handles and gesture capture to perform roaming interaction in real time, enabling multi-dimensional, autonomous interactive operation and allowing park scenes to be viewed from different angles. A VR-based three-dimensional roaming mode for an agricultural park breaks the time and space limitations of traditional agricultural park exhibition design: the park is displayed three-dimensionally and from all directions from the user's perspective, so that the user participates in it and interacts with it, producing a feeling of being personally on the scene.
When the user wears a VR headset such as the HTC Vive, the two infrared sensors supplied with the HTC Vive capture the user's movement data and movement direction in the real environment, and the virtual user moves accordingly in the virtual scene. Through this interaction mode, the user can roam autonomously in the constructed virtual scene. However, because the user wears VR glasses, the view of the real environment is blocked and the external surroundings cannot be perceived, so the user may hit a wall or trip over the connecting cable while moving, which poses a certain safety hazard.
Because of these drawbacks, agricultural park roaming systems generally restrict the user from walking around the exhibition space, and the user's virtual viewpoint is usually moved directly to the destination by a virtual teleportation mechanism. For example, a user at point A of the park may need to walk to point B in a realistic simulation, but to ensure the user's safety, the user is taken directly from point A to point B through virtual teleportation. This design lacks a simulation of the user's walking action, so the user's sense of participation is insufficient and the user cannot truly merge into the virtual scene.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a park digital roaming display method and system based on multi-source information fusion.
The invention provides a park digital roaming display method based on multi-source information fusion, which comprises: tracking the user's lower limbs through sensors in an omnidirectional treadmill, determining the user's corresponding movement speed and direction, and determining the user's movement state in the virtual scene of the three-dimensional park model according to that speed and direction; and tracking the user's upper limbs through VR gloves worn by the user, so that when the user's operation on an interactive object through the VR gloves is detected as a touch, the touch effect is displayed according to preset interaction information, or, when the operation is detected as a grab, the corresponding virtual farming operation is performed on the interactive object according to the user's operation.
According to one embodiment of the invention, the park digital roaming display method based on multi-source information fusion further comprises: acquiring, from a background database, real-time environment data of the real park collected by sensors, and associating the real-time environment data with virtual sensor interaction points at the corresponding positions in the scene; the associated interaction points are used to display according to the real-time environment data, or to display the real-time environment data directly at the virtual sensor interaction points, when the user touches an associated virtual sensor interaction point through the VR gloves.
According to the park digital roaming display method based on multi-source information fusion, before tracking the user's lower limbs through the sensors in the omnidirectional treadmill, the method further comprises: acquiring high-definition images of the park buildings and roads, generating a three-dimensional model of the park, and performing texture mapping and light-and-shadow parameter setting according to the high-definition images to obtain a preliminary three-dimensional model; and importing the preliminary three-dimensional model into the Unity platform, adding sky, ground and lighting effects, and adjusting spatial parameters to obtain the three-dimensional model, where the spatial parameters comprise three-dimensional position, rotation angle and size.
According to the park digital roaming display method based on multi-source information fusion, after the three-dimensional model is obtained, the method further comprises: adding a CameraRig component at a preset position in the virtual scene of the three-dimensional model and adjusting it to match the size of the models in the scene.
According to the park digital roaming display method based on multi-source information fusion, before tracking the user's lower limbs through the sensors in the omnidirectional treadmill, the method further comprises: adding a character control prefab for the omnidirectional treadmill to the scene of the three-dimensional model and associating the omnidirectional treadmill with the VR gloves; and adding a preset movement component script file and setting the user's maximum movement speed and gravity-sensing parameters in the virtual scene, so as to simulate the user's walking actions in the virtual scene in real time.
According to the park digital roaming display method based on multi-source information fusion, tracking the upper limbs through the VR gloves worn by the user comprises: monitoring the VR glove actions according to a touch event script, based on a VR glove interaction item script and a collision detection component mounted on an interactive object in the three-dimensional model scene; and monitoring the VR glove actions according to a set grab level and a grab event script, based on the VR glove interaction item script and the collision detection component mounted on the interactive object in the three-dimensional model scene.
According to the park digital roaming display method based on multi-source information fusion provided by an embodiment of the invention, the interactive object is an interactive button, and monitoring the VR glove actions according to the touch event script, based on the VR glove interaction item script and the collision detection component mounted on the interactive object in the three-dimensional model scene, comprises: monitoring the VR glove actions according to a set click event script, based on a VR glove button interface script and a collision detection component mounted on a UI interaction button; correspondingly, when the virtual finger corresponding to the VR glove clicks the interactive button, displaying the information corresponding to the interactive button, where the displayed information comprises two-dimensional pictures, videos and a text introduction of the park.
The invention also provides a park digital roaming display system based on multi-source information fusion, which comprises: a lower limb tracking module, configured to track the user's lower limbs through sensors in an omnidirectional treadmill, determine the user's corresponding movement speed and direction, and determine the user's movement state in the virtual scene of the three-dimensional park model according to that speed and direction; and an upper limb tracking module, configured to track the upper limbs through VR gloves worn by the user, display the touch effect according to preset interaction information when the user's operation on an interactive object through the VR gloves is detected as a touch, or perform the corresponding virtual farming operation on the interactive object according to the user's operation when the operation is detected as a grab.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the steps of any of the above park digital roaming display methods based on multi-source information fusion.
The present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the above park digital roaming display method based on multi-source information fusion.
According to the park digital roaming display method and system based on multi-source information fusion, the user's lower limbs are tracked through the sensors in the omnidirectional treadmill and the upper limbs are tracked through the VR gloves worn by the user, so that park information is displayed to the user and virtual farming operations are performed. Taking the user as the basic starting point, the user can roam and interact in the virtual park as if physically present, which stimulates the user's interest in roaming, increases user participation, creates a good user experience, and helps the user better understand the development and operation of the park.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow chart of a park digital roaming display method based on multi-source information fusion provided by the invention;
FIG. 2 is a block diagram of a park digital roaming display system based on multi-source information fusion provided by the present invention;
FIG. 3 is a schematic structural diagram of a park digital roaming display system based on multi-source information fusion provided by the invention;
fig. 4 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The following describes the park digital roaming display method and system based on multi-source information fusion in combination with fig. 1-4. Fig. 1 is a schematic flow chart of a park digital roaming display method based on multi-source information fusion, as shown in fig. 1, the park digital roaming display method based on multi-source information fusion, provided by the present invention, includes:
101. Track the user's lower limbs through sensors in the omnidirectional treadmill, determine the user's corresponding movement speed and direction, and determine the user's movement state in the virtual scene of the three-dimensional park model according to that speed and direction.
102. Track the user's upper limbs through VR gloves worn by the user; when the user's operation on an interactive object through the VR gloves is detected as a touch, display the touch effect according to preset interaction information, or, when the operation is detected as a grab, perform the corresponding virtual farming operation on the interactive object according to the user's operation.
The technical scheme of the invention mainly comprises a virtual scene module and a roaming interaction module. The virtual scene module is constructed based on the Unity engine to complete the construction of the virtual park scene.
In the digital display of an agricultural park, gesture information from the user is usually collected by a gesture capture device to complete the corresponding interactive or farming operation. Such methods generally use a Leap Motion device to collect the user's gestures and a custom action analysis module to recognize actions such as clicking, grabbing and touching. However, the Leap Motion has a detection range of only 25 mm to 600 mm and both hands must remain within that range, which is inconvenient during virtual farming operations. Moreover, the device provides no tactile feedback, so the user cannot tell whether the corresponding virtual operation has been triggered, and the user's gestures cannot be captured finely.
In the invention, interaction is implemented with VR gloves. Specifically, the roaming interaction module is built on a VR headset, an omnidirectional treadmill and VR gloves, for example the HTC Vive, Virtuix Omni and Noitom Hi5, respectively. Based on the HTC Vive SDK, the Virtuix Omni SDK and the Noitom Hi5 SDK, user operations are customized through script design, data are collected and responded to, and the simulation of the user's actions in the virtual scene is completed.
The Virtuix Omni device fixes the user's range of motion, which removes the safety hazard, while the omnidirectional tracking equipment collects the user's motion data and realistically simulates walking, so the user can walk and run freely through 360 degrees in the virtual environment. The Noitom Hi5 gesture capture device removes the spatial restriction, collects the user's hand gestures with high precision, and simulates real farming operations.
According to the park digital roaming display method based on multi-source information fusion, the user's lower limbs are tracked through the sensors in the omnidirectional treadmill and the upper limbs are tracked through the VR gloves worn by the user, so that park information is displayed to the user and virtual farming operations are performed. Taking the user as the basic starting point, the user can roam and interact in the virtual park as if physically present, which stimulates the user's interest in roaming, increases user participation, creates a good user experience, and helps the user better understand the development and operation of the park.
In one embodiment, the method further comprises: acquiring, from a background database, real-time environment data of the real park collected by sensors, and associating the real-time environment data with virtual sensor interaction points at the corresponding positions in the scene; the associated interaction points are used to display according to the real-time environment data, or to display the real-time environment data directly at the virtual sensor interaction points, when the user touches an associated virtual sensor interaction point through the VR gloves.
In an agricultural park there are many sensors and intelligent facility devices, through which the intelligent and modernized nature of the park can be presented. Existing display systems, however, emphasize virtual roaming of the park and display of park scenes while omitting the display of multi-source agricultural data, so the user cannot gain a deep understanding of the park's development and philosophy, and the displayed content is monotonous.
Further, the technical scheme provided by the embodiment of the invention comprises a virtual scene module, a roaming interaction module and a big data display module. The big data display module is implemented with the UGUI XChart component, and the sensor data are parsed from JSON to achieve multi-source fusion and dynamic display of the data.
First, JSON data parsing: the agricultural data collected by the real park's sensors (air temperature, air humidity, soil temperature, light intensity, etc.) are parsed from JSON, the data stream to be displayed is obtained, and it is transmitted to the background database for storage.
Second, data visualization: the agricultural data stored in the background database are retrieved through the UGUI XChart component and displayed in the virtual scene, through script design, as line charts, bar charts, pie charts, radar charts and other forms. Wearing the HTC Vive, the user opens the UI panel through the menu interaction function of the roaming interaction module, views the agricultural data in real time, and gets a more intuitive picture of the park's intelligent development. The data can also be displayed directly at the virtual sensor interaction points, which the user can browse by roaming to the corresponding sensor positions.
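To make the parsing step concrete, here is a minimal Unity C# sketch, assuming one flat JSON record per sensor reading; the SensorReading class and its field names are illustrative stand-ins for the agricultural data listed above (air temperature, air humidity, soil temperature, light intensity), not the patent's actual data schema, and the call into the XChart component is omitted.

```csharp
using System;
using UnityEngine;

// Hypothetical shape of one sensor record delivered by the park's data gateway.
[Serializable]
public class SensorReading
{
    public string sensorId;        // which park sensor produced the record
    public float airTemperature;   // degrees Celsius
    public float airHumidity;      // percent relative humidity
    public float soilTemperature;  // degrees Celsius
    public float lightIntensity;   // lux
}

public class SensorJsonParser : MonoBehaviour
{
    // Parse one JSON record into a strongly typed object that the
    // visualization layer (e.g. a line or bar chart) can consume.
    public SensorReading Parse(string json)
    {
        return JsonUtility.FromJson<SensorReading>(json);
    }
}
```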
According to the park digital roaming display method based on multi-source information fusion, the park sensor data are displayed dynamically and visually in the virtual environment on the basis of big data visualization techniques, achieving fused display of multi-source data, presenting the current state of the park more comprehensively and truthfully, and improving the user's immersion and participation during roaming.
In one embodiment, before tracking the user's lower limbs through the sensors in the omnidirectional treadmill, the method further comprises: acquiring high-definition images of the park buildings and roads, generating a three-dimensional model of the park, and performing texture mapping and light-and-shadow parameter setting according to the high-definition images to obtain a preliminary three-dimensional model; and importing the preliminary three-dimensional model into the Unity platform, adding sky, ground and lighting effects, and adjusting spatial parameters to obtain the three-dimensional model, where the spatial parameters comprise three-dimensional position, rotation angle and size.
Specifically, the virtual scene module implements the following procedures:
For park data acquisition, the different building structures, building distribution, road layout and architectural styles of the park can be photographed on site with high-definition cameras and unmanned aerial vehicles to obtain high-quality photographs.
For three-dimensional modeling, the building models can be produced in 3ds Max from the photographic material acquired in the previous step, followed by texture mapping and light-and-shadow design.
For virtual scene construction, the three-dimensional model produced in the previous step can be imported into the Unity platform, and effects such as a virtual sky, ground and lighting are added to the virtual scene to increase the realism of the environment. In the Unity platform, parameters such as the model's three-dimensional position, rotation angle and size are adjusted, finally achieving an equal-proportion reproduction of the real park and building a lifelike virtual park scene.
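As a rough illustration of this assembly step, the Unity C# sketch below places and scales an imported model and adds a directional light; the object name "ParkModel" and the specific transform and light values are placeholders for illustration, not values taken from the patent.

```csharp
using UnityEngine;

public class ParkSceneSetup : MonoBehaviour
{
    void Start()
    {
        // Place and scale the imported park model so it matches the real park's proportions.
        GameObject park = GameObject.Find("ParkModel");  // placeholder object name
        if (park != null)
        {
            park.transform.position = new Vector3(0f, 0f, 0f);        // three-dimensional position
            park.transform.rotation = Quaternion.Euler(0f, 90f, 0f);  // rotation angle
            park.transform.localScale = Vector3.one;                  // size (1:1 with the real park)
        }

        // Add a simple directional light to stand in for the sun.
        GameObject lightObj = new GameObject("Sunlight");
        Light sun = lightObj.AddComponent<Light>();
        sun.type = LightType.Directional;
        lightObj.transform.rotation = Quaternion.Euler(50f, -30f, 0f);
    }
}
```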
In one embodiment, after the three-dimensional model is obtained, the method further comprises: adding a CameraRig component at a preset position in the virtual scene of the three-dimensional model and adjusting it to match the size of the models in the scene.
For VR scene construction, the SteamVR plugin and the Noitom Hi5 Unity SDK are imported into the virtual scene built above, a [CameraRig]_Hi5 component is added to the scene, placed at the preset position of the virtual scene, and adjusted to match the size of the models in the scene. The child camera object (head) contained in the [CameraRig]_Hi5 component acts as the VR camera. Once the user wears the HTC Vive on the head and the Noitom Hi5 gloves on the hands, the user enters the virtual scene, can observe the scene independently through 360 degrees and move the hands freely, and the virtual two-hand model in the scene reproduces the hand motion data in real time.
In one embodiment, before tracking the user's lower limbs through the sensors in the omnidirectional treadmill, the method further comprises: adding a character control prefab for the omnidirectional treadmill to the scene of the three-dimensional model and associating the omnidirectional treadmill with the VR gloves; and adding a preset movement component script file and setting the user's maximum movement speed and gravity-sensing parameters in the virtual scene, so as to simulate the user's walking actions in the virtual scene in real time.
Specifically, for walking simulation, the Omni SDK is imported into the constructed VR scene to complete the tracking and simulation of the user's limb movements. An [Omni Character Controller] prefab (the character control prefab) is added to the scene, and the [CameraRig]_Hi5 in the scene is assigned to its [Camera Reference] variable (associating the omnidirectional treadmill with the VR gloves). By adding the [Omni Movement Component] script file and setting parameters such as the user's maximum movement speed and gravity-sensing value in the virtual scene, the user's walking actions such as moving forward, moving backward and moving sideways are simulated in the virtual scene in real time.
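A minimal Unity C# sketch of how the treadmill's speed and direction could drive the virtual user is shown below; because the patent names the Omni SDK components but not their API, the ITreadmillInput interface is a hypothetical stand-in for the Virtuix Omni SDK, and only the standard CharacterController is used.

```csharp
using UnityEngine;

// Hypothetical abstraction over the omnidirectional treadmill's SDK:
// it reports how fast the user is stepping and which way they face.
public interface ITreadmillInput
{
    float StepSpeed { get; }         // metres per second, from the lower-limb sensors
    Vector3 FacingDirection { get; } // unit vector in the horizontal plane
}

[RequireComponent(typeof(CharacterController))]
public class TreadmillLocomotion : MonoBehaviour
{
    public float maxMoveSpeed = 3.0f;  // maximum movement speed in the virtual scene
    public float gravity = 9.81f;      // gravity-sensing value

    private CharacterController controller;
    private ITreadmillInput treadmill; // assigned by whatever wraps the Omni SDK; left null here

    void Awake()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        if (treadmill == null) return;

        // Clamp the reported stepping speed; whether this reads as walking or
        // running in the scene simply follows from the reported speed.
        float speed = Mathf.Min(treadmill.StepSpeed, maxMoveSpeed);
        Vector3 move = treadmill.FacingDirection.normalized * speed;
        move.y = -gravity; // constant downward pull keeps the avatar grounded (not full physics)
        controller.Move(move * Time.deltaTime);
    }
}
```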
In the configured VR scene, the Hi5_Interaction_SDK is imported to complete the customization of the interaction operations.
In one embodiment, tracking the upper limbs through the VR gloves worn by the user comprises: monitoring the VR glove actions according to a touch event script, based on a VR glove interaction item script and a collision detection component mounted on an interactive object in the three-dimensional model scene; and monitoring the VR glove actions according to a set grab level and a grab event script, based on the VR glove interaction item script and the collision detection component mounted on the interactive object in the three-dimensional model scene.
For the virtual interaction response, the Hi5_Interaction_SDK is imported into the VR scene configured in the above embodiment to complete the customization of the interaction operations.
Object touch: a VR glove interaction item script, such as the Hi5_Glove_Interaction_Item script, is mounted on the interactive object, a collision detection Collider component is added, and a custom touch event script performs the monitoring. When the virtual hands touch the interactive object, the event-listener feedback is obtained, the system judges whether the operation is a two-hand touch and reacts accordingly, simulating the effect of the user touching the object in a real environment.
Object grabbing: the Hi5_Glove_Interaction_Item script is mounted on the interactive object, a collision detection Collider component is added, the Grasp level is set, and a custom grab event script performs the monitoring. When the virtual hands grab the interactive object, the event-listener feedback is obtained, the system judges whether the operation is a two-hand grab and reacts accordingly, simulating the effect of the user grabbing the object in a real environment and guiding the user to perform the corresponding virtual farming operation.
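The collider-based touch and grab monitoring can be sketched as follows in Unity C#; it uses plain trigger callbacks rather than the Hi5_Interaction_SDK's own event types (which the patent names but does not define), so the "VRGlove" tag, the OnTouched/OnGrabbed handlers and the grasp-level field are assumptions for illustration only.

```csharp
using UnityEngine;

// Attach to an interactive object together with a Collider marked "Is Trigger".
// (As usual in Unity, one side of the trigger pair also needs a Rigidbody.)
[RequireComponent(typeof(Collider))]
public class InteractiveObject : MonoBehaviour
{
    public int graspLevel = 1; // grab difficulty, mirroring the "Grasp level" setting

    void OnTriggerEnter(Collider other)
    {
        // "VRGlove" is a hypothetical tag placed on the virtual hand models.
        if (!other.CompareTag("VRGlove")) return;
        OnTouched(other.gameObject);
    }

    void OnTouched(GameObject glove)
    {
        // Show the preset touch effect, e.g. highlight the object or pop up its UI panel.
        Debug.Log($"{name} touched by {glove.name}");
    }

    // Called by the (assumed) grab event script once both hands satisfy the grasp level.
    public void OnGrabbed(GameObject leftGlove, GameObject rightGlove)
    {
        // Trigger the corresponding virtual farming operation, e.g. picking a fruit.
        Debug.Log($"{name} grabbed; starting virtual farming operation");
    }
}
```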
In one embodiment, the interactive object is an interactive button, and monitoring the VR glove actions according to the touch event script, based on the VR glove interaction item script and the collision detection component mounted on the interactive object in the three-dimensional model scene, comprises: monitoring the VR glove actions according to a set click event script, based on a VR glove button interface script and a collision detection component mounted on the UI interaction button; and, when the virtual finger corresponding to the VR glove clicks the interactive button, displaying the information corresponding to the interactive button, where the displayed information comprises two-dimensional pictures, videos and a text introduction of the park.
For the menu interaction display, a VR glove button interface script, such as the Hi5_Interace_Button script, is mounted on the UI interaction button, a collision detection Collider component is added, and a custom click event script performs the monitoring. When a virtual finger clicks the interactive button, the event-listener feedback is obtained, the system judges whether the operation is a finger click and reacts accordingly, simulating the effect of the user clicking a button in a real environment and allowing the user to view two-dimensional pictures, videos, text introductions and other information about the park in the virtual scene.
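The UI-button case can be sketched in the same way; the "VRFinger" tag and the reuse of a standard UnityEngine.UI.Button are illustrative assumptions, not the actual Hi5_Interace_Button implementation.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Attach next to a UI Button that also carries a trigger Collider, so the
// virtual fingertip can "press" it by entering the collider.
// (One of the colliders in the pair needs a Rigidbody for trigger events.)
[RequireComponent(typeof(Button))]
public class FingerClickableButton : MonoBehaviour
{
    private Button button;

    void Awake()
    {
        button = GetComponent<Button>();
    }

    void OnTriggerEnter(Collider other)
    {
        // "VRFinger" is a hypothetical tag on the glove's fingertip colliders.
        if (!other.CompareTag("VRFinger")) return;

        // Reuse the Button's normal onClick event to open the park's
        // picture / video / text introduction panel.
        button.onClick.Invoke();
    }
}
```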
Fig. 2 is a framework diagram of the park digital roaming display system based on multi-source information fusion according to the present invention, which can be read together with the above embodiments. The operating flow of the VR system based on the above embodiments is as follows:
(1) The user starts the roaming experience system;
(2) The user puts on the low-friction shoes dedicated to the Omni device, steps onto the Omni platform, and puts on the movable waist ring. (The waist ring keeps the user within the platform area, where the user can still turn freely through 360 degrees, and it prevents the user from falling or stepping out of the sensing area.)
(3) The user puts on the HTC Vive headset and the Noitom Hi5 VR gloves, ready for virtual park roaming.
(4)-(6) These operations enable the user's autonomous virtual roaming and free interaction across the park's multiple zones, buildings and interaction points; building A is used as the illustration.
(4) Walking: the user faces building A in the virtual environment. On the platform the user walks in place, while in the virtual scene the user walks towards building A. (Walking in place within the platform area changes the position of the user's movement in the virtual environment, much like running on a treadmill; the Omni device tracks the user's walking motion and decides whether the user is walking or running from the user's movement speed.)
(5) Interaction: after the user reaches building A, which contains an interaction point, the user touches the interactive object through the VR gloves' touch interaction response (virtual interaction response - object touch), and the UI menu introduction of building A pops up. Through the VR gloves' menu interaction response (virtual interaction response - menu interaction), the user then views pictures, text and other introductions of building A. Through the VR gloves' grab response (virtual interaction response - object grabbing), the user can perform virtual farming operations, simulating farming activities such as picking fruit and watering.
(6) Big data display: building A also contains a virtual sensor model. The user touches the sensor model through the VR gloves' touch interaction response (virtual interaction response - object touch), the corresponding environmental data pop up, and the user can independently choose among different display modes for dynamic display of the multi-source real-time agricultural data.
(7) The user exits the virtual roaming system.
The park digital roaming display system based on multi-source information fusion provided by the invention is described below, and the park digital roaming display system based on multi-source information fusion described below and the park digital roaming display method based on multi-source information fusion described above can be referred to correspondingly.
Fig. 3 is a schematic structural diagram of the park digital roaming display system based on multi-source information fusion. As shown in fig. 3, the system includes a lower limb tracking module 301 and an upper limb tracking module 302. The lower limb tracking module 301 is configured to track the user's lower limbs through sensors in the omnidirectional treadmill, determine the user's corresponding movement speed and direction, and determine the user's movement state in the virtual scene of the three-dimensional park model according to that speed and direction. The upper limb tracking module 302 is configured to track the upper limbs through the VR gloves worn by the user, display the touch effect according to preset interaction information when the user's operation on an interactive object through the VR gloves is detected as a touch, or perform the corresponding virtual farming operation on the interactive object according to the user's operation when the operation is detected as a grab.
The system embodiment provided in the embodiments of the present invention is for implementing the above method embodiments; for the details of the process, reference is made to the method embodiments, which are not repeated here.
According to the park digital roaming display system based on multi-source information fusion provided by the embodiment of the invention, the user's lower limbs are tracked through the sensors in the omnidirectional treadmill and the upper limbs are tracked through the VR gloves worn by the user, so that park information is displayed to the user and virtual farming operations are performed. Taking the user as the basic starting point, the user can roam and interact in the virtual park as if physically present, which stimulates the user's interest in roaming, increases user participation, creates a good user experience, and helps the user better understand the development and operation of the park.
Fig. 4 is a schematic structural diagram of an electronic device provided by the present invention. As shown in fig. 4, the electronic device may include a processor 401, a communication interface 402, a memory 403 and a communication bus 404, where the processor 401, the communication interface 402 and the memory 403 communicate with each other through the communication bus 404. The processor 401 may call logic instructions in the memory 403 to execute the park digital roaming display method based on multi-source information fusion, the method comprising: tracking the user's lower limbs through sensors in an omnidirectional treadmill, determining the user's corresponding movement speed and direction, and determining the user's movement state in the virtual scene of the three-dimensional park model according to that speed and direction; and tracking the user's upper limbs through VR gloves worn by the user, so that when the user's operation on an interactive object through the VR gloves is detected as a touch, the touch effect is displayed according to preset interaction information, or, when the operation is detected as a grab, the corresponding virtual farming operation is performed on the interactive object according to the user's operation.
In addition, the logic instructions in the memory 403 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention further provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to execute the park digital roaming display method based on multi-source information fusion provided by the above methods, the method comprising: tracking the user's lower limbs through sensors in an omnidirectional treadmill, determining the user's corresponding movement speed and direction, and determining the user's movement state in the virtual scene of the three-dimensional park model according to that speed and direction; and tracking the user's upper limbs through VR gloves worn by the user, so that when the user's operation on an interactive object through the VR gloves is detected as a touch, the touch effect is displayed according to preset interaction information, or, when the operation is detected as a grab, the corresponding virtual farming operation is performed on the interactive object according to the user's operation.
In yet another aspect, the present invention further provides a non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the park digital roaming display method based on multi-source information fusion provided by the above embodiments, the method comprising: tracking the user's lower limbs through sensors in an omnidirectional treadmill, determining the user's corresponding movement speed and direction, and determining the user's movement state in the virtual scene of the three-dimensional park model according to that speed and direction; and tracking the user's upper limbs through VR gloves worn by the user, so that when the user's operation on an interactive object through the VR gloves is detected as a touch, the touch effect is displayed according to preset interaction information, or, when the operation is detected as a grab, the corresponding virtual farming operation is performed on the interactive object according to the user's operation.
The above-described system embodiments are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A park digital roaming display method based on multi-source information fusion is characterized by comprising the following steps:
tracking the lower limbs of a user through sensors in an omnidirectional treadmill, determining the user's corresponding movement speed and direction, and determining the user's movement state in a virtual scene of a three-dimensional park model according to the movement speed and direction;
tracking the user's upper limbs through VR gloves worn by the user; when the user's operation on an interactive object through the VR gloves is detected as a touch, displaying the touch effect according to preset interaction information, or, when the user's operation on the interactive object through the VR gloves is detected as a grab, performing the corresponding virtual farming operation on the interactive object according to the user's operation.
2. The park digital roaming display method based on multi-source information fusion of claim 1, wherein the method further comprises:
acquiring, from a background database, real-time environment data of the real park collected by sensors, and associating the real-time environment data with virtual sensor interaction points at the corresponding positions in the scene;
wherein the associated interaction points are used to display according to the real-time environment data, or to display the real-time environment data directly at the virtual sensor interaction points, when the user touches an associated virtual sensor interaction point through the VR gloves.
3. The park digital roaming display method based on multi-source information fusion of claim 1, wherein before tracking the user's lower limbs through the sensors in the omnidirectional treadmill, the method further comprises:
acquiring high-definition images of the park buildings and roads, generating a three-dimensional model of the park, and performing texture mapping and light-and-shadow parameter setting according to the high-definition images to obtain a preliminary three-dimensional model;
importing the preliminary three-dimensional model into a Unity platform, adding sky, ground and lighting effects, and adjusting spatial parameters to obtain the three-dimensional model;
wherein the spatial parameters comprise a three-dimensional position, a rotation angle and a size.
4. The park digital roaming display method based on multi-source information fusion of claim 3, wherein after the three-dimensional model is obtained, the method further comprises:
adding a CameraRig component at a preset position in the virtual scene of the three-dimensional model and adjusting it to match the size of the models in the scene.
5. The park digital roaming display method based on multi-source information fusion of claim 1, wherein before tracking the user's lower limbs through the sensors in the omnidirectional treadmill, the method further comprises:
adding a character control prefab for the omnidirectional treadmill to the scene of the three-dimensional model, and associating the omnidirectional treadmill with the VR gloves;
adding a preset movement component script file and setting the user's maximum movement speed and gravity-sensing parameters in the virtual scene, so as to simulate the user's walking actions in the virtual scene in real time.
6. The park digital roaming display method based on multi-source information fusion of claim 1, wherein tracking the upper limbs through the VR gloves worn by the user comprises:
monitoring the VR glove actions according to a touch event script, based on a VR glove interaction item script and a collision detection component mounted on an interactive object in the three-dimensional model scene;
monitoring the VR glove actions according to a set grab level and a grab event script, based on the VR glove interaction item script and the collision detection component mounted on the interactive object in the three-dimensional model scene.
7. The park digital roaming display method based on multi-source information fusion of claim 6, wherein the interactive object is an interactive button, and monitoring the VR glove actions according to the touch event script, based on the VR glove interaction item script and the collision detection component mounted on the interactive object in the three-dimensional model scene, comprises:
monitoring the VR glove actions according to a set click event script, based on a VR glove button interface script and a collision detection component mounted on a UI interaction button;
correspondingly, when the virtual finger corresponding to the VR glove clicks the interactive button, displaying the information corresponding to the interactive button, wherein the displayed information comprises two-dimensional pictures, videos and a text introduction of the park.
8. A park digital roaming display system based on multi-source information fusion, characterized by comprising:
a lower limb tracking module, configured to track the user's lower limbs through sensors in an omnidirectional treadmill, determine the user's corresponding movement speed and direction, and determine the user's movement state in the virtual scene of the three-dimensional park model according to the movement speed and direction;
and an upper limb tracking module, configured to track the upper limbs through VR gloves worn by the user, display the touch effect according to preset interaction information when the user's operation on an interactive object through the VR gloves is detected as a touch, or perform the corresponding virtual farming operation on the interactive object according to the user's operation when the operation is detected as a grab.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the park digital roaming display method based on multi-source information fusion according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the park digital roaming display method based on multi-source information fusion according to any one of claims 1 to 7.
CN202111146316.8A 2021-09-28 2021-09-28 Park digital roaming display method and system based on multi-source information fusion Active CN114020978B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111146316.8A CN114020978B (en) 2021-09-28 2021-09-28 Park digital roaming display method and system based on multi-source information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111146316.8A CN114020978B (en) 2021-09-28 2021-09-28 Park digital roaming display method and system based on multi-source information fusion

Publications (2)

Publication Number Publication Date
CN114020978A true CN114020978A (en) 2022-02-08
CN114020978B CN114020978B (en) 2024-06-11

Family

ID=80055003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111146316.8A Active CN114020978B (en) 2021-09-28 2021-09-28 Park digital roaming display method and system based on multi-source information fusion

Country Status (1)

Country Link
CN (1) CN114020978B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106601062A (en) * 2016-11-22 2017-04-26 山东科技大学 Interactive method for simulating mine disaster escape training
US20200047074A1 (en) * 2017-11-17 2020-02-13 Tencent Technology (Shenzhen) Company Limited Role simulation method and terminal apparatus in vr scene
CN108919940A (en) * 2018-05-15 2018-11-30 青岛大学 A kind of Virtual Campus Cruise System based on HTC VIVE
CN109067822A (en) * 2018-06-08 2018-12-21 珠海欧麦斯通信科技有限公司 The real-time mixed reality urban service realization method and system of on-line off-line fusion
CN111694426A (en) * 2020-05-13 2020-09-22 北京农业信息技术研究中心 VR virtual picking interactive experience system, method, electronic equipment and storage medium
CN111667560A (en) * 2020-06-04 2020-09-15 成都飞机工业(集团)有限责任公司 Interaction structure and interaction method based on VR virtual reality role

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王雷; 王晓原; 刘智平; 刘海红: "Fundamental theory for driver behavior research - a review of multi-source information fusion algorithms", Transportation Standardization, no. 01, 15 January 2007 (2007-01-15) *
陈艳芳; 刘海鹏: "Development of a nuclear power plant emergency assistance system based on virtual technology", Nuclear Safety, no. 06, 30 December 2019 (2019-12-30) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494667A (en) * 2022-02-21 2022-05-13 北京华建云鼎科技股份公司 Data processing system and method for adding crash box

Also Published As

Publication number Publication date
CN114020978B (en) 2024-06-11

Similar Documents

Publication Publication Date Title
US10269089B2 (en) Multi-user virtual reality processing
CN106873778B (en) Application operation control method and device and virtual reality equipment
US9842433B2 (en) Method, apparatus, and smart wearable device for fusing augmented reality and virtual reality
CN110515455B (en) Virtual assembly method based on Leap Motion and cooperation in local area network
US10600253B2 (en) Information processing apparatus, information processing method, and program
CN111191322B (en) Virtual maintainability simulation method based on depth perception gesture recognition
CN102789313A (en) User interaction system and method
EP2681638A1 (en) Real-time virtual reflection
CN112198959A (en) Virtual reality interaction method, device and system
EP3557375B1 (en) Information processing device, information processing method, and program
CN107844195B (en) Intel RealSense-based development method and system for virtual driving application of automobile
Capece et al. Graphvr: A virtual reality tool for the exploration of graphs with htc vive system
CN108595004A (en) More people's exchange methods, device and relevant device based on Virtual Reality
CN115337634A (en) VR (virtual reality) system and method applied to meal games
CN103785169A (en) Mixed reality arena
CN106125927B (en) Image processing system and method
CN114020978B (en) Park digital roaming display method and system based on multi-source information fusion
CN114281190A (en) Information control method, device, system, equipment and storage medium
CN109643182B (en) Information processing method and device, cloud processing equipment and computer program product
CN112241198A (en) Method and device for realizing augmented reality scene and storage medium
Zhang et al. Virtual navigation considering user workspace: Automatic and manual positioning before teleportation
CN202003298U (en) Three-dimensional uncalibrated display interactive device
CN115861496A (en) Power scene virtual human body driving method and device based on dynamic capture system
JP2018190196A (en) Information processing method, information processing device, program causing computer to execute information processing method
US20240193894A1 (en) Data processing method and apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant