CN115562474A - Virtual environment and real scene fusion display system - Google Patents

Info

Publication number
CN115562474A
Authority
CN
China
Prior art keywords
virtual, scene, map, real, virtual character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210176252.4A
Other languages
Chinese (zh)
Inventor
许如兵 (Xu Rubing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Huizai Exhibition Display Co., Ltd.
Original Assignee
Shanghai Huizai Exhibition Display Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Huizai Exhibition Display Co., Ltd.
Priority to CN202210176252.4A
Publication of CN115562474A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 — Geographic models
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 — Manipulating 3D models or images for computer graphics
    • G06T19/006 — Mixed reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 — Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 — Indexing scheme relating to G06F3/01
    • G06F2203/012 — Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The invention relates to the technical field of virtual reality, and in particular to a display system that fuses a virtual environment with a real scene. The system comprises a real scene modeling module, a virtual scene modeling module, a sensor fusion module and a controller terminal module. The invention establishes a display system capable of virtual-reality interaction: a grid map is built with a laser radar, a three-dimensional virtual environment is built with a 3D projector, and data fusion and coordinate-system alignment are performed, so that after a user walks into the three-dimensional virtual environment, the user can perform simple motion interaction with a virtual character projected into it. An obstacle avoidance method based on dynamic constraints performs obstacle-avoiding local path planning on the virtual character's motion data: when the virtual character meets a guest, it plans a local avoidance path by itself according to the established constraint conditions, and after the guest has been avoided it continues toward its specified destination, forming an interactive system that genuinely combines the virtual and the real.

Description

Virtual environment and real scene fusion display system
Technical Field
The invention relates to the technical field of virtual reality (IPC classification G06Q30/02), and in particular to a display system that fuses a virtual environment with a real scene.
Background
At the present stage, with the gradual development of virtual reality and augmented reality, fusing a virtual environment with a real scene has become a trend pursued by young people, and virtual reality is widely applied to simulation operations across industries, bringing people immersive experiences and sensory stimulation. Over its long-term development, however, virtual reality has remained at the stage of rendering the real environment: it cannot satisfy the demand for better interaction between people and the virtual environment, and cannot bring people a more realistic interactive experience.
Patent CN201711136290 provides a method and system for interaction in a virtual reality exhibition hall: a virtual exhibition hall and a client port are established, and when a client enters the virtual exhibition hall, a staff member adjusts objects in the hall to interact with the client according to where the client's gaze falls.
Patent CN202011431772 provides an exhibition hall interaction method combining virtual reality and augmented reality: a virtual system is built through a server and the environment is modeled; the user interacts with background staff through voice and text, and the staff change objects in the virtual environment according to the interaction information, realizing a form of human-computer interaction.
However, both of the above patents have staff manually control the virtual exhibition hall to interact with the virtual environment. In the first patent the staff cannot truly know the user's ideas and needs, which leads to confused interaction and misreads the user's real intent; manual control does not genuinely realize interaction between the user and the virtual environment. The second patent is, in essence, interaction between the user and another person, and likewise cannot deliver a genuine viewing experience.
Therefore, aiming at the problems of existing virtual-real interactive systems, a display system that fuses a virtual environment with a real scene is urgently needed. By establishing a multi-sensor data acquisition and fusion scheme, the collected guest motion information is directly associated with the environmental information of the virtual scene, realizing interaction between people and the virtual scene and making the guest's interactive experience more realistic.
Disclosure of Invention
Aiming at the above problems, the invention provides a display system that fuses a virtual environment with a real scene, comprising a real scene modeling module, a virtual scene modeling module, a sensor fusion module and a controller terminal module. The real scene modeling module establishes a real-scene map based on the display site; the virtual scene modeling module establishes a 3D-projection-based virtual environment in the display site; the sensor fusion module collects and fuses the information recorded by the sensors in the scene; the controller terminal module controls the operation and switching of the sensors, acousto-optic devices and electric control devices in the display site. When a guest walks through the real-scene map of the display site, the virtual character in the virtual environment actively avoids the guest under constraint conditions.
Preferably, the real scene modeling module uses a 2D laser radar (lidar) to build a two-dimensional point cloud map of the designated exhibition area and converts the point cloud map into a grid map.
Preferably, the virtual scene modeling module adopts a 3D projector to establish a 3D re-projection map on the grid map.
Preferably, a coordinate mapping method based on the grid map is established in the 3D re-projection map, aligning the coordinates of the 3D re-projection map with those of the grid map.
Preferably, the virtual scene modeling module establishes a three-dimensional virtual scene model, and establishes a world coordinate system with position information in the three-dimensional virtual scene model.
Preferably, virtual character motion data is computed in the world coordinate system of the three-dimensional virtual scene model and transmitted to the sensor fusion module.
Preferably, after receiving the virtual character motion data, the sensor fusion module fuses it with the grid map, so that the virtual character is treated as a dynamic obstacle in the grid map.
Preferably, the virtual character movement data specifies a start position and an end position of the virtual character.
Preferably, in the scene interactive display system, the guest moves slowly through the scene after standing at a designated position; the guest's motion position data is recorded by the 2D lidar and the position information is then recorded on the grid map.
Preferably, an obstacle avoidance method based on dynamic constraints relates the virtual character motion data to the guest motion position data; according to this method, when the virtual character meets a guest, it re-plans a local path, bypasses the guest, and then continues moving toward the end position.
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention establishes a display system capable of virtual-reality interaction: a grid map is built with a laser radar, a three-dimensional virtual environment is built with a 3D projector, and data fusion and coordinate-system alignment are performed, so that after a user walks into the three-dimensional virtual environment, the user can perform simple motion interaction with the virtual character projected into it, and when a guest meets the virtual character, the character avoids the guest by itself.
(2) The invention establishes an obstacle avoidance method based on dynamic constraints for obstacle-avoiding local path planning on the virtual character motion data: when the virtual character meets a guest, it plans a local avoidance path according to the established constraint conditions, and after the guest has been avoided it continues toward its specified destination, forming an interactive system that genuinely combines the virtual and the real.
Drawings
Fig. 1 is a block diagram of a display system in which a virtual environment and a real scene are merged.
Detailed Description
A display system for fusing a virtual environment with a real scene comprises a real scene modeling module, a virtual scene modeling module, a sensor fusion module and a controller terminal module. The real scene modeling module establishes a real-scene map based on the display site; the virtual scene modeling module establishes a 3D-projection-based virtual environment in the display site; the sensor fusion module acquires and fuses the information recorded by the sensors in the scene; the controller terminal module controls the operation and switching of the sensors, acousto-optic devices and electric control devices in the display site. When a guest walks through the real-scene map of the display site, the virtual character in the virtual environment actively avoids the guest under constraint conditions.
In one embodiment, the real scene modeling module uses a 2D laser radar (lidar) to build a two-dimensional point cloud map of the designated exhibition area and converts the point cloud map into a grid map.
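The patent gives no implementation for this conversion. A minimal Python sketch of one plausible realisation follows, assuming the lidar returns arrive as (x, y) points in metres in the map frame; the `points_to_grid` name, the cell resolution, and the area size are illustrative assumptions, not values from the patent.

```python
import numpy as np

def points_to_grid(points_xy, resolution=0.05, size_m=10.0):
    """Rasterize 2D lidar points into an occupancy grid.

    points_xy  : (N, 2) array of x, y returns in metres, map frame
    resolution : metres per cell (assumed value, not from the patent)
    size_m     : side length of the square exhibition area (assumed)
    """
    cells = int(size_m / resolution)
    grid = np.zeros((cells, cells), dtype=np.uint8)  # 0 = free, 1 = occupied
    # Shift the origin to the grid centre, then quantize points to cells.
    idx = np.floor((np.asarray(points_xy) + size_m / 2) / resolution).astype(int)
    # Discard returns that fall outside the mapped area.
    ok = (idx >= 0).all(axis=1) & (idx < cells).all(axis=1)
    grid[idx[ok, 1], idx[ok, 0]] = 1  # row = y cell, col = x cell
    return grid
```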
In a preferred embodiment, a world coordinate system is established in the grid map with the display site as reference; the physical coordinates of guests in the map are generated in this world coordinate system, and time stamps and map pixel coordinates are recorded as the guests move through the scene.
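As an illustration of the record this embodiment describes, the sketch below converts a guest's grid-map pixel coordinate to world coordinates and timestamps the observation; `record_guest`, the grid origin, and the resolution are assumptions, since the patent specifies none of them.

```python
import time

def record_guest(track, pixel_uv, resolution=0.05, origin_xy=(-5.0, -5.0)):
    """Append one timestamped guest observation to a track list.

    pixel_uv  : (col, row) cell of the guest on the grid map
    origin_xy : world coordinates of grid cell (0, 0) (assumed value)
    """
    u, v = pixel_uv
    world = (origin_xy[0] + u * resolution, origin_xy[1] + v * resolution)
    track.append({"timestamp": time.time(), "pixel": pixel_uv, "world": world})
    return track
```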
In one embodiment, the virtual scene modeling module uses a 3D projector to build a 3D re-projection map on the grid map.
In a preferred embodiment, the 3D projector, erected above the display site, projects the three-dimensional virtual-environment animation simulation model established by the controller terminal module into the display site.
In one embodiment, a grid map-based coordinate mapping method is established in the 3D re-projection map, and the 3D re-projection map is aligned with the grid map.
Specifically, the coordinate alignment involves a distortion-removing projective transformation of the 3D re-projection map: the map is first calibrated and then divided into triangular meshes; the coordinate points of the triangle table are adjusted to correct the 3D re-projection position, and the coordinate system of the corrected 3D re-projection map is rotated and scaled to align it with the coordinate system of the grid map.
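The patent names no specific method for the final rotation and size transformation. One standard way to realise it, sketched below on the assumption that matched control points are available in both coordinate systems, is a least-squares similarity transform (Procrustes/Umeyama analysis):

```python
import numpy as np

def align_similarity(src_pts, dst_pts):
    """Least-squares similarity transform mapping src -> dst: returns
    rotation R (2x2), scale s, translation t with p_dst ~ s * R @ p_src + t.

    src_pts, dst_pts : (N, 2) matched control points, N >= 2
    """
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S)   # SVD of the cross-covariance
    d = np.ones(2)
    if np.linalg.det(U @ Vt) < 0:         # guard against a reflection
        d[-1] = -1.0
    R = U @ np.diag(d) @ Vt
    s = (sig * d).sum() / (S ** 2).sum()  # isotropic scale factor
    t = mu_d - s * R @ mu_s
    return R, s, t
```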
In one embodiment, the virtual scene modeling module establishes a three-dimensional virtual scene model, and establishes a world coordinate system with position information in the three-dimensional virtual scene model.
In one embodiment, virtual character motion data is computed in the world coordinate system of the three-dimensional virtual scene model and transmitted to the sensor fusion module.
In one embodiment, after receiving the virtual character motion data, the sensor fusion module fuses it with the grid map, so that the virtual character is treated as a dynamic obstacle in the grid map.
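A minimal sketch of this fusion step, reusing the occupancy-grid convention from the earlier sketch and anticipating the later statement that each dynamic obstacle is recorded only as a point value; the function name and signature are assumptions:

```python
def fuse_dynamic_obstacles(static_grid, char_cells, guest_cells):
    """Overlay per-frame dynamic obstacles on a copy of the static grid.

    static_grid : 2D uint8 occupancy grid (0 = free, 1 = occupied)
    char_cells  : iterable of (row, col) cells of virtual characters
    guest_cells : iterable of (row, col) cells of guests
    """
    grid = static_grid.copy()
    for r, c in list(char_cells) + list(guest_cells):
        grid[r, c] = 1  # each dynamic obstacle is a single point value
    return grid
```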
In one embodiment, the virtual character movement data specifies a start position and an end position of the virtual character.
In a preferred embodiment, the virtual character performs global path planning from the start position to the end position; when it encounters a guest, it performs local path planning on top of the global plan. The local planning runs in real time and generates path information for avoiding the guest within a local area; after the guest has been avoided, the character resumes the global path toward the end position.
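The patent does not name a concrete planner. As one plausible realisation, the sketch below runs a 4-connected A* search over the occupancy grid; the global path would be planned once on the static grid, and when a guest cell blocks it, the same search can be re-run on the fused grid from the character's current cell, reproducing the replan-locally-then-resume behaviour described above.

```python
import heapq

def astar(grid, start, goal):
    """4-connected A* over an occupancy grid (0 = free, 1 = occupied).

    start, goal : (row, col) cells. Returns a list of cells or None.
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_q = [(h(start), 0, start, None)]
    came, g_best = {}, {start: 0}
    while open_q:
        _, g, cur, parent = heapq.heappop(open_q)
        if cur in came:          # already expanded via a shorter route
            continue
        came[cur] = parent
        if cur == goal:          # walk parents back to the start
            path = [cur]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and grid[nxt] == 0 and g + 1 < g_best.get(nxt, float("inf"))):
                g_best[nxt] = g + 1
                heapq.heappush(open_q, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None  # goal unreachable
```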
In one embodiment, the guest moves slowly through the scene after standing at a designated position; the guest's motion position data is recorded by the 2D lidar and the position information is then recorded on the grid map.
In one embodiment, an obstacle avoidance method based on dynamic constraints relates the virtual character motion data to the guest motion position data; according to this method, when the virtual character meets a guest, it re-plans a local path, bypasses the guest, and then continues moving toward the end position.
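For the dynamic constraints themselves the patent is again silent. A minimal sketch of one constraint-based avoidance step follows, with assumed bounds on speed, turn rate, and guest clearance (all illustrative values):

```python
import math

def avoid_step(pos, heading, goal, guests, v=0.5, max_turn=math.radians(30),
               clearance=0.8, dt=0.1):
    """One control step under simple dynamic constraints: bounded speed
    and turn rate, plus a minimum clearance from every guest.
    Returns the best (new_pos, new_heading), or None to stop and wait.
    """
    best, best_cost = None, float("inf")
    for turn in (k * max_turn / 4 for k in range(-4, 5)):  # candidate turns
        hd = heading + turn
        nxt = (pos[0] + v * dt * math.cos(hd), pos[1] + v * dt * math.sin(hd))
        # Reject candidates that violate the clearance constraint.
        if any(math.dist(nxt, g) < clearance for g in guests):
            continue
        cost = math.dist(nxt, goal)  # prefer progress toward the end position
        if cost < best_cost:
            best, best_cost = (nxt, hd), cost
    return best
```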
In a preferred embodiment of the scene interactive display system, the dynamic obstacle records the virtual character's movement data only as a point value on the grid map, and the guest's real movement data is likewise recorded only as a point value on the grid map, without monitoring limb movements.

Claims (10)

1. A display system for fusing a virtual environment and a real scene, characterized by comprising a real scene modeling module, a virtual scene modeling module, a sensor fusion module and a controller terminal module; the real scene modeling module establishes a real-scene map based on the display site; the virtual scene modeling module establishes a 3D-projection-based virtual environment in the display site; the sensor fusion module acquires and fuses the information recorded by the sensors in the scene; the controller terminal module controls the operation and switching of the sensors, acousto-optic devices and electric control devices in the display site; and when a guest walks through the real-scene map of the display site, the virtual character in the virtual environment actively avoids the guest under constraint conditions.
2. The system according to claim 1, wherein the real scene modeling module employs a 2D lidar to create a two-dimensional point cloud map within a designated area and convert the point cloud map into a grid map.
3. The system according to claim 1, wherein said virtual scene modeling module uses a 3D projector to create a 3D re-projected map on the grid map.
4. The system according to claim 3, wherein a grid map-based coordinate mapping method is established in the 3D re-projection map, and coordinates of the 3D re-projection map and the grid map are aligned.
5. The system according to claim 1 or 3, wherein the virtual scene modeling module establishes a three-dimensional virtual scene model, and establishes a world coordinate system with position information in the three-dimensional virtual scene model.
6. The system according to claim 5, wherein virtual character motion data of the three-dimensional virtual scene model is computed in the world coordinate system and transmitted to the sensor fusion module.
7. The system according to claim 6, wherein the sensor fusion module, after receiving the virtual character motion data, fuses it with the grid map so that the virtual character serves as a dynamic obstacle in the grid map.
8. The system according to claim 6, wherein the virtual character movement data specifies a start position and an end position of the virtual character.
9. The system as claimed in claim 1, wherein the scene interactive display system is configured such that the guest moves slowly in the scene after standing at a designated position, and the position information is recorded on the grid map after the motion position data of the guest is recorded by the 2D lidar.
10. The system according to claim 6, wherein an obstacle avoidance method based on dynamic constraints relates the virtual character movement data to the guest movement position data; according to the method, when the virtual character meets a guest, a local path is re-planned to bypass the guest before continuing to the end position.
CN202210176252.4A 2022-02-25 2022-02-25 Virtual environment and real scene fusion display system Pending CN115562474A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210176252.4A CN115562474A (en) 2022-02-25 2022-02-25 Virtual environment and real scene fusion display system

Publications (1)

Publication Number Publication Date
CN115562474A true CN115562474A (en) 2023-01-03

Family

ID=84737146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210176252.4A Pending CN115562474A (en) 2022-02-25 2022-02-25 Virtual environment and real scene fusion display system

Country Status (1)

Country Link
CN (1) CN115562474A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116541923A (en) * 2023-04-07 2023-08-04 中国民用航空飞行学院 VR-based indoor installation foundation positioning method for equipment with support
CN116541923B (en) * 2023-04-07 2023-12-19 中国民用航空飞行学院 VR-based indoor installation foundation positioning method for equipment with support
CN117224951A (en) * 2023-11-02 2023-12-15 深圳市洲禹科技有限公司 Pedestrian behavior prediction method and device based on perception and electronic equipment

Similar Documents

Publication Publication Date Title
WO2021073292A1 (en) Ar scene image processing method and apparatus, and electronic device and storage medium
WO2018116790A1 (en) Inconsistency detection system, mixed reality system, program, and inconsistency detection method
JP6088094B1 (en) System for creating a mixed reality environment
CN109671118B (en) Virtual reality multi-person interaction method, device and system
Zollmann et al. Flyar: Augmented reality supported micro aerial vehicle navigation
CN115562474A (en) Virtual environment and real scene fusion display system
US9317956B2 (en) Apparatus and method for providing mixed reality contents for learning through story-based virtual experience
US20160343166A1 (en) Image-capturing system for combining subject and three-dimensional virtual space in real time
EP2579128A1 (en) Portable device, virtual reality system and method
JP2018528509A (en) Projected image generation method and apparatus, and mapping method between image pixel and depth value
CN108548300B (en) Air supply method and device of air conditioner and electronic equipment
EP2175636A1 (en) Method and system for integrating virtual entities within live video
JP2013061937A (en) Combined stereo camera and stereo display interaction
CN109725733A (en) Human-computer interaction method and human-computer interaction equipment based on augmented reality
CN109828658A (en) A kind of man-machine co-melting long-range situation intelligent perception system
US20210038975A1 (en) Calibration to be used in an augmented reality method and system
CN106873300B (en) Virtual space projection method and device for intelligent robot
Liu et al. Mobile delivery robots: Mixed reality-based simulation relying on ros and unity 3D
US20180239514A1 (en) Interactive 3d map with vibrant street view
CN111272172A (en) Unmanned aerial vehicle indoor navigation method, device, equipment and storage medium
CN111443723A (en) Program for generating and displaying third visual angle view of unmanned aerial vehicle
JP2018106661A (en) Inconsistency detection system, mixed reality system, program, and inconsistency detection method
KR101641672B1 (en) The system for Augmented Reality of architecture model tracing using mobile terminal
Wither et al. Using aerial photographs for improved mobile AR annotation
Chow Multi-sensor integration for indoor 3D reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination