CN116012680A - CAVE type virtual simulation large screen fusion system based on multiple channels

CAVE type virtual simulation large screen fusion system based on multiple channels

Info

Publication number
CN116012680A
CN116012680A
Authority
CN
China
Prior art keywords
virtual, simulation, dimensional, cave, data
Prior art date
Legal status
Pending
Application number
CN202211607540.7A
Other languages
Chinese (zh)
Inventor
谭贻国
陈月梅
谭斐
Current Assignee
Shenzhen Elephant Skills Technology Co ltd
Original Assignee
Shenzhen Elephant Skills Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Elephant Skills Technology Co ltd
Priority to CN202211607540.7A
Publication of CN116012680A


Abstract

The invention discloses a multi-channel CAVE-based virtual simulation large screen fusion system, which relates to the technical field of virtual reality and comprises an intelligent terminal, a wireless receiver, an intelligent controller and a CAVE virtual system.

Description

CAVE type virtual simulation large screen fusion system based on multiple channels
Technical Field
The invention relates to the technical field of virtual reality, in particular to a CAVE type virtual simulation large screen fusion system based on multiple channels.
Background
Virtual simulation is essentially a computer technology that creates and lets a user experience a virtual world. The virtual world is generated by a computer and may be a reproduction of the real world or a purely imagined one; the user can interact with it naturally through multiple sensory channels such as vision, hearing, and touch. By simulation, a three-dimensional virtual world reflecting the changes of, and interactions among, physical objects is created for the user, and auxiliary sensing equipment such as a head-mounted display (HMD) and data gloves provide a three-dimensional interface for observing and interacting with that world, so that the user can directly participate in the environment, explore the behavior and changes of the simulated objects, and gain a sense of immersion. Although traditional single-projection-surface virtual simulation has improved realism by using 3D technology, single-sided projection gives a poor immersion effect: it cannot cover the user's entire field of view, and the 3D projection cannot be fused with the screen, which reduces the user's sense of immersion and fails to provide an immersive, on-the-scene experience.
Disclosure of Invention
The invention aims to provide a multi-channel CAVE type virtual simulation large screen fusion system. An intelligent terminal sends out a simulation virtual signal for a virtual experience; a wireless receiver forwards the signal to an intelligent controller; the intelligent controller directs the CAVE virtual system to generate the corresponding virtual environment according to the signal; the CAVE virtual system, through its virtual unit and a computer synchronous-operation rendering technique, projects the simulated virtual environment onto the three-dimensional large screen; and the interaction unit then interacts or operates with objects in the virtual environment, completing man-machine interaction and free scene switching of the three-dimensional simulation image, so as to solve the problems identified in the background art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a CAVE type virtual simulation large screen fusion system based on multiple channels comprises an intelligent terminal, a wireless receiver, an intelligent controller and a CAVE virtual system;
the intelligent terminal is used for generating simulation virtual signals in real time, the intelligent terminal transmits the simulation virtual signals to the intelligent controller through the wireless receiver, and the intelligent controller controls the CAVE virtual system according to the received simulation virtual signals;
the wireless receiver is used for receiving wireless radio-frequency signals, chiefly the simulation virtual signals sent by the intelligent terminal; it communicates with the intelligent terminal and is generally used in conjunction with it;
and the intelligent controller is used for controlling the CAVE virtual system; the intelligent controller is driven by the intelligent terminal, and the CAVE virtual system generates the simulated virtual environment under the control of the intelligent controller.
The CAVE virtual system is used for organically combining high-resolution stereo projection technology, three-dimensional computer graphics technology, acoustic technology and the like to generate a completely immersive virtual environment, in which any object can respond to a participant's operations with corresponding changes.
Further, the CAVE virtual system comprises:
the three-dimensional large screen is rendered through multi-channel three-dimensional display and the synchronous operation of the virtual unit to form a cave-shaped projection space, within which the interaction unit then interacts or operates with the virtual environment objects;
the virtual unit is used for generating a simulation virtual environment, the intelligent controller sends a simulation virtual signal to the virtual unit, and the virtual unit generates a projected simulation virtual environment on the three-dimensional large screen through a computer synchronous operation rendering technology according to the received simulation virtual signal;
the interaction unit is used for collecting the motion data generated by the user, capturing and tracking the user's motion in real time, determining the physical position data of the tracked object across the X, Y and Z coordinates, processing the collected motion data, and uploading the processed data to the three-dimensional large screen.
Further, the virtual unit includes:
the CAVE system simulation server is used for receiving the simulation virtual signals sent by the intelligent controller, analyzing the received signals and configuring images from them, generating data images, and sending the data images to the active stereo fusion device;
the active stereo fusion device is used for receiving the data images sent by the CAVE system simulation server, performing fusion processing on them, synthesizing a three-dimensional stereoscopic simulation image through that fusion processing, and sending the synthesized image to the active stereoscopic projection device;
the active stereoscopic projection device is used for receiving the three-dimensional stereoscopic simulation image transmitted by the active stereo fusion device and projecting it onto the stereoscopic large screen, where the projected image forms a virtual simulation space.
Further, a plurality of active stereoscopic projection devices and stereoscopic large screens are provided; the large screens form four faces, namely a main screen, a lower screen, a left screen and a right screen, and each face corresponds to one active stereoscopic projection device.
Further, the active stereoscopic fusion device includes:
the filtering sub-module is used for carrying out filtering processing on the received data image and filtering interference noise in the data image;
the enhancement sub-module is used for enhancing the filtered data image so that the image is clearer;
the image fusion sub-module is used for fusing the enhanced data images to form a three-dimensional simulation image;
and the smoothing processing sub-module is used for carrying out smoothing processing on the three-dimensional simulation image, filling the image defect caused by filtering processing and fusion, and enhancing the integrity of the image.
Further, the method comprises the following steps:
the intelligent terminal sends out a simulation virtual signal of virtual experience, the wireless receiver collects the simulation virtual signal sent out by the intelligent terminal and sends the simulation virtual signal to the intelligent controller, and the intelligent controller controls the CAVE virtual system to generate a corresponding simulated virtual environment according to the simulation virtual signal;
the virtual unit analyzes and processes the simulation virtual signal through the CAVE system simulation server, converts the parsed signal into data information, analyzes the data information and configures an image from it to generate a data image, and sends the image data to the active stereo fusion device; the active stereo fusion device synthesizes the scene of a three-dimensional stereoscopic simulation image and transmits the synthesized scene to the active stereoscopic projection device;
the three-dimensional stereoscopic simulation image is projected onto the stereoscopic large screens by the active stereoscopic projection devices; stereoscopic display is realized through this projection, and through the synchronous-operation rendering of the active stereoscopic projection devices the plurality of stereoscopic large screens form a cave-shaped projection space, within which the interaction unit interacts or operates with the virtual environment objects to complete man-machine interaction and free scene switching of the three-dimensional simulation image.
Further, the interaction unit includes:
the motion tracking sensor is used for capturing the position of a user and the motion data of limbs, detecting infrared light, and transmitting the position and the motion data of a tracked object to the server and the motion data calculation unit;
the action data calculation unit is used for receiving the user position and the action data transmitted by the action tracking sensor, determining a man-machine interaction instruction according to the user position and the action data, and transmitting the man-machine interaction instruction to the interactive response execution unit;
the interactive response execution unit is used for receiving the man-machine interaction instruction transmitted by the action data calculation unit, executing the target interaction instruction according to the man-machine interaction instruction, and controlling the switching of the three-dimensional large-screen virtual environment.
Further, the method comprises the following steps:
tracking a user in real time through an action tracking sensor, capturing and acquiring the position and action data of the user, transmitting the captured and acquired user position and action data to a server and an action data calculation unit, and processing and identifying the user position and the action data through the action data calculation unit;
after processing and identification by the action data calculation unit, the target information contained in the user position and motion data is identified, and the man-machine interaction instruction corresponding to that target information, namely a man-machine interaction command or a virtual scene switching command, is determined according to the target information;
the interactive response execution unit executes the corresponding target command according to the man-machine interaction instruction, so that the user can interact with the virtual environment of the three-dimensional large screen and control the switching of virtual scenes.
Further, a plurality of motion tracking sensors are provided, and the position of the user captured by the motion tracking sensors is defined by an X-axis, a Y-axis and a Z-axis, wherein the X-axis represents the horizontal position relative to the front of the motion tracking area, the Y-axis represents the horizontal position relative to the left and right sides of the motion tracking area, and the Z-axis represents the vertical position relative to the top of the motion tracking area.
Further, the motion data calculation unit performs phase compensation on the user position or motion data using the following formula after receiving the user position and motion data transmitted from the motion tracking sensor:
D_i = |RD{d_i(t,n)·exp[-jδ(t,n)]}|
where D_i denotes the i-th user position or motion datum after phase compensation; RD{·} denotes a range-Doppler imaging algorithm function; d_i(t,n) denotes the i-th user position or motion datum received from the motion tracking sensor; t and n denote the range direction and the azimuth direction, respectively; exp[·] denotes the exponential function with the natural constant e as its base; j denotes the imaginary unit; and δ(t,n) denotes the systematic phase error, including radial velocity and acceleration components.
The man-machine interaction instruction is then determined from the phase-compensated and corrected user position and motion data.
Compared with the prior art, the invention has the beneficial effects that:
1. the intelligent terminal sends out a virtual simulation virtual signal of virtual experience, the wireless receiver sends the virtual simulation virtual signal to the intelligent controller, the intelligent controller controls the CAVE virtual system to generate a corresponding virtual environment according to the virtual simulation virtual signal, the CAVE virtual system generates a projected virtual environment on the three-dimensional large screen through a computer synchronous operation rendering technology according to the received virtual simulation virtual signal, and then the interaction unit interacts or operates with a virtual environment object to complete man-machine interaction and scene free switching of the three-dimensional simulation image.
2. The CAVE virtual system is a room type projection visual collaborative environment based on a multichannel visual synchronization technology and a stereoscopic display technology, a virtual reality complete immersion effect is realized through four projection surfaces (three walls and a floor), each projection surface corresponds to an active stereoscopic projection device, so that the projection area can cover all visual fields of a user, an ultra-wide video is provided, no visual angle blind spot exists, the user is completely surrounded by one stereoscopic projection picture, and the CAVE virtual system can provide the user with an immersive immersion feeling.
3. The method comprises the steps of capturing and collecting the position and the motion data of a user through a motion tracking sensor, transmitting the captured and collected position and the motion data to a server and a motion data computing unit, processing and identifying the position and the motion data of the user through the motion data computing unit, identifying target information, determining a man-machine interaction instruction corresponding to the target information according to the target information, executing a corresponding target command according to the man-machine interaction instruction by an interactive response executing unit, enabling the user to interact with a virtual environment of a three-dimensional large screen, and controlling the virtual scene to be switched.
Drawings
FIG. 1 is a schematic diagram of the components of a multi-channel CAVE CAVE-based virtual simulation large screen fusion system of the present invention;
FIG. 2 is a schematic diagram of a CAVE virtual system according to the present invention;
FIG. 3 is a schematic diagram of a virtual cell according to the present invention;
fig. 4 is a schematic diagram of an interactive unit according to the present invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. The embodiments described are evidently only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the present invention.
To solve the technical problems that, although existing 3D technology improves realism, single-sided projection gives a poor immersion effect, cannot cover the user's entire field of view, and cannot fuse the 3D projection with the screen, thereby reducing the user's sense of immersion and failing to provide an immersive, on-the-scene experience, this embodiment provides the following technical solution, with reference to FIGS. 1-4:
a CAVE type virtual simulation large screen fusion system based on multiple channels comprises an intelligent terminal, a wireless receiver, an intelligent controller and a CAVE virtual system;
the intelligent terminal is used for generating simulation virtual signals in real time, the intelligent terminal transmits the simulation virtual signals to the intelligent controller through the wireless receiver, and the intelligent controller controls the CAVE virtual system according to the received simulation virtual signals;
the wireless receiver is used for receiving wireless radio-frequency signals, chiefly the simulation virtual signals sent by the intelligent terminal; it communicates with the intelligent terminal and is generally used in conjunction with it;
and the intelligent controller is used for controlling the CAVE virtual system; the intelligent controller is driven by the intelligent terminal, and the CAVE virtual system generates the simulated virtual environment under the control of the intelligent controller.
The CAVE virtual system is used for organically combining high-resolution stereo projection technology, three-dimensional computer graphics technology, acoustic technology and the like to generate a completely immersive virtual environment, in which any object can respond to a participant's operations with corresponding changes.
The CAVE virtual system comprises:
the three-dimensional large screen is used for forming a cave-shaped projection space through multi-channel three-dimensional display and the synchronous-operation rendering of the virtual unit, within which the interaction unit then interacts or operates with the virtual environment objects;
the virtual unit is used for generating a simulation virtual environment, the intelligent controller sends a simulation virtual signal to the virtual unit, and the virtual unit generates a projected simulation virtual environment on the three-dimensional large screen through a computer synchronous operation rendering technology according to the received simulation virtual signal;
the interaction unit is used for collecting the motion data generated by the user, capturing and tracking the user's motion in real time, determining the physical position data of the tracked object across the X, Y and Z coordinates, processing the collected motion data, and uploading the processed data to the three-dimensional large screen.
Specifically, the intelligent terminal sends out a simulation virtual signal for a virtual experience, and the wireless receiver forwards it to the intelligent controller. The intelligent controller directs the CAVE virtual system to generate the corresponding virtual environment according to the signal; the CAVE virtual system, through a computer synchronous-operation rendering technique, projects the simulated virtual environment onto the three-dimensional large screen, and the interaction unit then interacts or operates with the virtual environment objects to complete man-machine interaction and free scene switching of the three-dimensional simulation image. The CAVE virtual system organically combines high-resolution stereo projection technology, three-dimensional computer graphics, acoustics and other technologies to generate a fully immersive virtual environment in which any object can respond to a participant's operations with corresponding changes.
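To make the signal flow above concrete, the following minimal Python sketch models the terminal → wireless receiver → intelligent controller → CAVE virtual system pipeline. It is an illustration only: the patent does not specify an implementation, and every class, method and field name here (SimulationSignal, scene_id, and so on) is a hypothetical stand-in.

```python
# Hypothetical sketch of the terminal -> receiver -> controller -> CAVE
# pipeline. All names are illustrative; the patent does not prescribe an API.
from dataclasses import dataclass

@dataclass
class SimulationSignal:
    """A 'simulation virtual signal' issued by the intelligent terminal."""
    scene_id: str   # which virtual scene to generate (hypothetical field)
    payload: dict   # scene parameters (assumed representation)

class CaveVirtualSystem:
    """Generates the simulated virtual environment on the stereo screens."""
    def generate_environment(self, signal: SimulationSignal) -> None:
        print(f"rendering scene '{signal.scene_id}' on the CAVE large screens")

class IntelligentController:
    """Directs the CAVE virtual system according to received signals."""
    def __init__(self, cave: CaveVirtualSystem) -> None:
        self.cave = cave
    def handle(self, signal: SimulationSignal) -> None:
        self.cave.generate_environment(signal)

class WirelessReceiver:
    """Receives the RF signal from the terminal and forwards it onward."""
    def __init__(self, controller: IntelligentController) -> None:
        self.controller = controller
    def receive(self, signal: SimulationSignal) -> None:
        self.controller.handle(signal)

# Usage: the intelligent terminal emits a simulation virtual signal.
receiver = WirelessReceiver(IntelligentController(CaveVirtualSystem()))
receiver.receive(SimulationSignal(scene_id="demo_room", payload={}))
```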
The virtual unit includes:
the CAVE system simulation server is used for receiving the simulation virtual signals sent by the intelligent controller, analyzing the received signals and configuring images from them, generating data images, and sending the data images to the active stereo fusion device;
the active stereo fusion device is used for receiving the data images sent by the CAVE system simulation server, performing fusion processing on them, synthesizing a three-dimensional stereoscopic simulation image through that fusion processing, and sending the synthesized image to the active stereoscopic projection device;
the active stereoscopic projection device is used for receiving the three-dimensional stereoscopic simulation image transmitted by the active stereo fusion device and projecting it onto the stereoscopic large screen, where the projected image forms a virtual simulation space.
A plurality of active stereoscopic projection devices and stereoscopic large screens are provided; the large screens form four faces, namely a main screen, a lower screen, a left screen and a right screen, and each face corresponds to one active stereoscopic projection device.
It should be noted that the full virtual-reality immersion effect is realized through four projection surfaces (three walls and a floor), which improves multi-channel synchronous simulation efficiency and the sense of immersion in the scene. Each projection surface corresponds to one active stereoscopic projection device, so the projection area covers the user's entire field of view with ultra-wide video and no blind spots; the user is completely surrounded by a stereoscopic projection picture, and the CAVE virtual system can provide an immersive, on-the-scene experience.
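The cave-shaped projection space depends on the four projection channels rendering in lockstep. The sketch below illustrates one plausible synchronization scheme, a per-frame barrier across the channels; the patent names "synchronous operation rendering" but does not prescribe this mechanism, so the threading model and channel names are assumptions.

```python
# Minimal sketch of frame-locked rendering across the four CAVE channels.
# The barrier guarantees that no projector advances to frame N+1 until
# every channel has finished drawing frame N (assumed synchronization model).
import threading

CHANNELS = ["main", "lower", "left", "right"]  # one projector per surface
FRAMES = 3  # render a few frames for demonstration

barrier = threading.Barrier(len(CHANNELS))

def render_channel(name: str) -> None:
    for frame in range(FRAMES):
        # Each channel renders its own view of the shared scene here.
        print(f"channel {name}: rendered frame {frame}")
        barrier.wait()  # wait until all four channels finish this frame

threads = [threading.Thread(target=render_channel, args=(c,)) for c in CHANNELS]
for t in threads:
    t.start()
for t in threads:
    t.join()
```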
The active stereoscopic fusion device comprises:
the filtering sub-module is used for carrying out filtering processing on the received data image and filtering interference noise in the data image;
the enhancement sub-module is used for enhancing the filtered data image so that the image is clearer;
the image fusion sub-module is used for fusing the enhanced data images to form a three-dimensional simulation image;
and the smoothing processing sub-module is used for carrying out smoothing processing on the three-dimensional simulation image, filling the image defect caused by filtering processing and fusion, and enhancing the integrity of the image.
The filtering sub-module filters interference noise out of the data image; the enhancement sub-module enhances the data image to make it clearer; the image fusion sub-module fuses the data images to form a three-dimensional stereoscopic simulation image; and the smoothing sub-module performs smoothing to fill image defects caused by filtering and fusion. Through this processing, the resulting three-dimensional stereoscopic simulation image is clearer and more complete, the quality and precision of the image are improved, and control deviations caused by image quality are avoided.
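A minimal sketch of this filter → enhance → fuse → smooth pipeline follows, using common OpenCV/NumPy operations (median blur, sharpening kernel, per-pixel averaging, Gaussian blur) as assumed stand-ins; the patent names the four sub-modules but not their algorithms.

```python
# Sketch of the four-stage fusion pipeline; each OpenCV operation is an
# assumed stand-in, since the patent names the stages but not the algorithms.
import cv2
import numpy as np

def fuse_channel_images(images: list[np.ndarray]) -> np.ndarray:
    # 1. Filtering sub-module: suppress interference noise.
    filtered = [cv2.medianBlur(img, 3) for img in images]
    # 2. Enhancement sub-module: sharpen the denoised images.
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    enhanced = [cv2.filter2D(img, -1, kernel) for img in filtered]
    # 3. Image fusion sub-module: blend the enhanced channel images.
    fused = np.mean(np.stack(enhanced).astype(np.float32), axis=0).astype(np.uint8)
    # 4. Smoothing sub-module: fill defects introduced by filtering/fusion.
    return cv2.GaussianBlur(fused, (3, 3), 0)

# Usage with synthetic data standing in for per-channel data images.
channels = [np.random.randint(0, 256, (480, 640, 3), np.uint8) for _ in range(4)]
result = fuse_channel_images(channels)
print(result.shape, result.dtype)
```

Any concrete filter could be substituted at each stage; the point is the staged structure, not the specific kernels.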
The method comprises the following steps:
the intelligent terminal sends out a simulation virtual signal of virtual experience, the wireless receiver collects the simulation virtual signal sent out by the intelligent terminal and sends the simulation virtual signal to the intelligent controller, and the intelligent controller controls the CAVE virtual system to generate a corresponding simulated virtual environment according to the simulation virtual signal;
the virtual unit analyzes and processes the simulation virtual signal through the CAVE system simulation server, converts the parsed signal into data information, analyzes the data information and configures an image from it to generate a data image, and sends the image data to the active stereo fusion device; the active stereo fusion device synthesizes the scene of a three-dimensional stereoscopic simulation image and transmits the synthesized scene to the active stereoscopic projection device;
the three-dimensional stereoscopic simulation image is projected onto the stereoscopic large screens by the active stereoscopic projection devices; stereoscopic display is realized through this projection, and through the synchronous-operation rendering of the active stereoscopic projection devices the plurality of stereoscopic large screens form a cave-shaped projection space, within which the interaction unit interacts or operates with the virtual environment objects to complete man-machine interaction and free scene switching of the three-dimensional simulation image.
The interaction unit includes:
the motion tracking sensor is used for capturing the position of a user and the motion data of limbs, detecting infrared light, and transmitting the position and the motion data of a tracked object to the server and the motion data calculation unit;
the action data calculation unit is used for receiving the user position and the action data transmitted by the action tracking sensor, determining a man-machine interaction instruction according to the user position and the action data, and transmitting the man-machine interaction instruction to the interactive response execution unit;
and the interactive response execution unit is used for receiving the human-computer interaction instruction transmitted by the action data calculation unit, executing the target interaction instruction according to the human-computer interaction instruction and controlling the switching of the three-dimensional large-screen virtual environment.
The method comprises the following steps:
tracking a user in real time through an action tracking sensor, capturing and acquiring the position and action data of the user, transmitting the captured and acquired user position and action data to a server and an action data calculation unit, and processing and identifying the user position and the action data through the action data calculation unit;
after processing and identification by the action data calculation unit, the target information contained in the user position and motion data is identified, and the man-machine interaction instruction corresponding to that target information, namely a man-machine interaction command or a virtual scene switching command, is determined according to the target information;
the interactive response execution unit executes the corresponding target command according to the man-machine interaction instruction, so that the user can interact with the virtual environment of the three-dimensional large screen and control the switching of virtual scenes, as sketched below.
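The following toy sketch shows how recognized target information might be resolved into either a man-machine interaction command or a virtual scene switching command; the gesture vocabulary, the distance threshold and the command names are invented for illustration and are not taken from the patent.

```python
# Hypothetical mapping from recognized target information to commands.
# The gesture vocabulary and command names are illustrative only.
from enum import Enum, auto

class Command(Enum):
    INTERACT_WITH_OBJECT = auto()   # man-machine interaction command
    SWITCH_SCENE = auto()           # virtual scene switching command
    NONE = auto()

def resolve_command(gesture: str, position: tuple[float, float, float]) -> Command:
    """Determine the man-machine interaction instruction from target info."""
    x, y, z = position
    if gesture == "swipe":               # e.g. a swipe switches the scene
        return Command.SWITCH_SCENE
    if gesture == "point" and z < 2.0:   # pointing near a screen -> interact
        return Command.INTERACT_WITH_OBJECT
    return Command.NONE

print(resolve_command("swipe", (0.0, 1.2, 3.0)))   # Command.SWITCH_SCENE
print(resolve_command("point", (0.5, 1.0, 1.5)))   # Command.INTERACT_WITH_OBJECT
```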
A plurality of motion tracking sensors are provided, and the position of the user captured by the motion tracking sensors is defined by an X-axis, a Y-axis and a Z-axis, wherein the X-axis represents the horizontal position relative to the front of the motion tracking area, the Y-axis represents the horizontal position relative to the left and right sides of the motion tracking area, and the Z-axis represents the vertical position relative to the top of the motion tracking area.
Note that the X-axis and the Y-axis together represent the 2D horizontal position, and the Z-axis represents the vertical position, within the motion tracking area. As the user moves, the physical X, Y, Z data of the tracked object changes; the captured X, Y, Z data is processed synchronously in the motion data calculation unit, and through its calculation the corresponding object position in the virtual world can be seen on the stereoscopic large screen.
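As a small illustration of this coordinate convention, the sketch below maps a tracked physical X/Y/Z sample to a virtual-world position for display; the scale factor and the 1:1 default mapping are assumptions, not details from the patent.

```python
# Toy conversion from tracked physical coordinates to virtual-world
# coordinates; the scale and origin are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class TrackedPosition:
    x: float  # horizontal, relative to the front of the tracking area
    y: float  # horizontal, relative to the left/right sides
    z: float  # vertical, relative to the top of the tracking area

def to_virtual_world(p: TrackedPosition, scale: float = 1.0) -> tuple:
    """Map physical units to virtual-world units (assumed 1:1 by default)."""
    return (p.x * scale, p.y * scale, p.z * scale)

# As the user moves, updated X/Y/Z samples are pushed through this mapping
# and the corresponding virtual object position is redrawn on the screens.
print(to_virtual_world(TrackedPosition(x=1.5, y=0.3, z=2.0)))
```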
Specifically, a user sends a simulation virtual signal for a virtual experience through the intelligent terminal, and the wireless receiver forwards the signal to the intelligent controller, which directs the CAVE virtual system to generate the corresponding virtual environment. Under the controller's direction, the CAVE virtual system analyzes and processes the simulation virtual signal, converts it into data information, analyzes the converted data information and configures an image from it to generate a data image, and sends the image data to the active stereo fusion device. The active stereo fusion device synthesizes the scene of a three-dimensional stereoscopic simulation image and transmits it to the active stereoscopic projection devices for projection; the plurality of active stereoscopic projection devices, matched with the plurality of stereoscopic large screens, realize the full virtual-reality immersion effect. While the user experiences the virtual environment, the motion tracking sensors capture the position of the user and the movements of the limbs in real time and transmit the captured user position and motion data to the server and the motion data calculation unit, which processes and identifies them. After this processing and identification, the captured object's X, Y, Z data is handled synchronously, the calculated corresponding object position in the virtual world becomes visible, the target information contained in the user's motion data is identified, and the man-machine interaction instruction corresponding to that target information is determined, namely a man-machine interaction command or a command to switch the virtual scene on the three-dimensional large screen. The interactive response execution unit then executes the corresponding target command according to the man-machine interaction instruction, so that the user can interact with the virtual environment of the three-dimensional large screen and control scene switching, freely switching virtual scenes during the virtual-reality experience, which brings convenience to the user's virtual experience.
After receiving the user position and the motion data transmitted by the motion tracking sensor, the motion data calculating unit performs phase compensation on the user position or the motion data by adopting the following formula:
D_i = |RD{d_i(t,n)·exp[-jδ(t,n)]}|
where D_i denotes the i-th user position or motion datum after phase compensation; RD{·} denotes a range-Doppler imaging algorithm function; d_i(t,n) denotes the i-th user position or motion datum received from the motion tracking sensor; t and n denote the range direction and the azimuth direction, respectively; exp[·] denotes the exponential function with the natural constant e as its base; j denotes the imaginary unit; and δ(t,n) denotes the systematic phase error, including radial velocity and acceleration components.
The man-machine interaction instruction is then determined from the phase-compensated and corrected user position and motion data.
Performing phase compensation on the user position and motion data received from the motion tracking sensors eliminates or reduces the system's influence on data transmission, avoiding the data errors caused by the system's lagging response to acquired signals and improving the precision of man-machine interaction control. The formula is easy to apply, computationally simple and efficient, and suited to the real-time requirements of data acquisition; the systematic phase error can be obtained by experimental testing of the actual system.
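A numerical sketch of the compensation step is given below: a 2-D FFT stands in for the range-Doppler operator RD{·}, and the systematic phase error δ(t,n) is modeled as an assumed linear-plus-quadratic term in azimuth (radial velocity and acceleration components). Neither choice is specified by the patent; the sketch only demonstrates the structure of D_i = |RD{d_i(t,n)·exp[-jδ(t,n)]}|.

```python
# Numerical sketch of D_i = |RD{ d_i(t,n) * exp(-j*delta(t,n)) }|.
# A 2-D FFT stands in for the range-Doppler operator RD{.}, and the
# phase-error model delta(t,n) is an assumed linear (radial velocity)
# plus quadratic (acceleration) term in the azimuth index.
import numpy as np

def phase_compensate(d: np.ndarray, v: float, a: float) -> np.ndarray:
    """d: complex samples indexed by (range t, azimuth n)."""
    _, N = d.shape
    n = np.arange(N)                                 # azimuth sample index
    delta = v * n + a * n**2                         # assumed delta(t, n)
    compensated = d * np.exp(-1j * delta)[None, :]   # remove phase error
    return np.abs(np.fft.fft2(compensated))          # |RD{...}| via 2-D FFT

rng = np.random.default_rng(0)
raw = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
D = phase_compensate(raw, v=0.01, a=1e-4)
print(D.shape, float(D.max()))
```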
The foregoing is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any equivalent substitution or modification of the technical solution and its inventive concept made by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.

Claims (10)

1. A CAVE type virtual simulation large screen fusion system based on multiple channels comprises an intelligent terminal, a wireless receiver, an intelligent controller and a CAVE virtual system;
the intelligent terminal is used for generating simulation virtual signals in real time, the intelligent terminal transmits the simulation virtual signals to the intelligent controller through the wireless receiver, and the intelligent controller controls the CAVE virtual system according to the received simulation virtual signals;
the wireless receiver is used for receiving wireless radio frequency signals and receiving simulation virtual signals sent by the intelligent terminal, and is used for communicating with the intelligent terminal;
the intelligent controller controls the CAVE virtual system through the intelligent terminal, and the CAVE virtual system generates a simulation virtual environment according to the control of the intelligent controller;
the CAVE virtual system is used for combining high-resolution stereo projection, three-dimensional computer graphics and sound together to generate a fully immersive virtual environment.
2. The multi-channel CAVE-based virtual simulation large-screen fusion system of claim 1, wherein the CAVE virtual system comprises:
the three-dimensional large screen is rendered through multi-channel three-dimensional display and the synchronous operation of the virtual unit to form a cave-shaped projection space, within which the interaction unit then interacts or operates with the virtual environment objects;
the virtual unit is used for generating a simulation virtual environment, the intelligent controller sends a simulation virtual signal to the virtual unit, and the virtual unit generates a projected simulation virtual environment on the three-dimensional large screen through computer synchronous operation rendering according to the received simulation virtual signal;
the interaction unit is used for collecting the motion data generated by the user, capturing and tracking the user's motion in real time, determining the physical position data of the tracked object across the X, Y and Z coordinates, processing the collected motion data, and uploading the processed data to the three-dimensional large screen.
3. The multi-channel CAVE-based virtual simulation large-screen fusion system as claimed in claim 2, wherein the virtual unit comprises:
the CAVE system simulation server is used for receiving the simulation virtual signals sent by the intelligent controller, analyzing the received signals and configuring images from them, generating data images, and sending the data images to the active stereo fusion device;
the active stereo fusion device is used for receiving the data images sent by the CAVE system simulation server, performing fusion processing on them, synthesizing a three-dimensional stereoscopic simulation image through that fusion processing, and sending the synthesized image to the active stereoscopic projection device;
the active stereoscopic projection device is used for receiving the three-dimensional stereoscopic simulation image transmitted by the active stereo fusion device and projecting it onto the stereoscopic large screen, where the projected image forms a virtual simulation space.
4. The multi-channel CAVE-based virtual simulation large screen fusion system according to claim 3, wherein a plurality of active stereoscopic projection devices are provided, the stereoscopic large screen is divided into four faces, namely a main screen, a lower screen, a left screen and a right screen, and each face corresponds to one active stereoscopic projection device.
5. A multi-channel CAVE-based virtual simulation large-screen fusion system as claimed in claim 3, wherein the active stereoscopic fusion device comprises:
the filtering sub-module is used for carrying out filtering processing on the received data image and filtering interference noise in the data image;
the enhancement sub-module is used for enhancing the filtered data image so that the image is clearer;
the image fusion sub-module is used for fusing the enhanced data images to form a three-dimensional simulation image;
and the smoothing processing sub-module is used for carrying out smoothing processing on the three-dimensional simulation image, filling the image defect caused by filtering processing and fusion, and enhancing the integrity of the image.
6. The multi-channel CAVE-based virtual simulation large screen fusion system according to any one of claims 1 to 3, wherein the implementation of the system comprises the following steps:
the intelligent terminal sends out a simulation virtual signal of virtual experience, the wireless receiver collects the simulation virtual signal sent out by the intelligent terminal and sends the simulation virtual signal to the intelligent controller, and the intelligent controller controls the CAVE virtual system to generate a corresponding simulated virtual environment according to the simulation virtual signal;
the virtual unit analyzes and processes the simulation virtual signal through the CAVE system simulation server, converts the parsed signal into data information, analyzes the data information and configures an image from it to generate a data image, and sends the image data to the active stereo fusion device; the active stereo fusion device synthesizes the scene of a three-dimensional stereoscopic simulation image and transmits the synthesized scene to the active stereoscopic projection device;
the three-dimensional stereoscopic simulation image is projected onto the stereoscopic large screens by the active stereoscopic projection devices; stereoscopic display is realized through this projection, and through the synchronous-operation rendering of the active stereoscopic projection devices the plurality of stereoscopic large screens form a cave-shaped projection space, within which the interaction unit interacts or operates with the virtual environment objects to complete man-machine interaction and free scene switching of the three-dimensional simulation image.
7. The multi-channel CAVE-based virtual simulation large-screen fusion system as claimed in claim 2, wherein the interaction unit comprises:
the motion tracking sensor is used for capturing the position of a user and the motion data of limbs, detecting infrared light, and transmitting the position and the motion data of a tracked object to the server and the motion data calculation unit;
the action data calculation unit is used for receiving the user position and the action data transmitted by the action tracking sensor, determining a man-machine interaction instruction according to the user position and the action data, and transmitting the man-machine interaction instruction to the interactive response execution unit;
and the interactive response execution unit is used for receiving the human-computer interaction instruction transmitted by the action data calculation unit, executing the target interaction instruction according to the human-computer interaction instruction and controlling the switching of the three-dimensional large-screen virtual environment.
8. The multi-channel CAVE based virtual simulation large screen fusion system as claimed in claim 7, wherein the implementation of the system further comprises the steps of:
tracking a user in real time through an action tracking sensor, capturing and acquiring the position and action data of the user, transmitting the captured and acquired user position and action data to a server and an action data calculation unit, and processing and identifying the user position and the action data through the action data calculation unit;
after processing and identification by the action data calculation unit, the target information contained in the user position and motion data is identified, the man-machine interaction instruction corresponding to the target information is determined according to that information, and the corresponding target command is executed according to the man-machine interaction instruction to switch the virtual scene.
9. The multi-channel CAVE-based virtual simulated large screen fusion system of claim 7, wherein a plurality of said motion tracking sensors are provided, and the position of the user captured by the motion tracking sensors is defined by an X-axis, a Y-axis and a Z-axis, wherein the X-axis represents the horizontal position relative to the front of the motion tracking area, the Y-axis represents the horizontal position relative to the left and right sides of the motion tracking area, and the Z-axis represents the vertical position relative to the top of the motion tracking area.
10. The multi-channel CAVE-based virtual simulation large-screen fusion system of claim 7, wherein the motion data calculation unit performs phase compensation on the user position or motion data after receiving the user position and motion data transmitted by the motion tracking sensor by using the following formula:
D_i = |RD{d_i(t,n)·exp[-jδ(t,n)]}|
where D_i denotes the i-th user position or motion datum after phase compensation; RD{·} denotes a range-Doppler imaging algorithm function; d_i(t,n) denotes the i-th user position or motion datum received from the motion tracking sensor; t and n denote the range direction and the azimuth direction, respectively; exp[·] denotes the exponential function with the natural constant e as its base; j denotes the imaginary unit; and δ(t,n) denotes the systematic phase error, including radial velocity and acceleration components;
and the man-machine interaction instruction is determined from the phase-compensated and corrected user position and motion data.
CN202211607540.7A 2022-12-14 2022-12-14 CAVE type virtual simulation large screen fusion system based on multiple channels Pending CN116012680A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211607540.7A CN116012680A (en) 2022-12-14 2022-12-14 CAVE type virtual simulation large screen fusion system based on multiple channels


Publications (1)

Publication Number Publication Date
CN116012680A (en) 2023-04-25

Family

ID=86036451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211607540.7A Pending CN116012680A (en) 2022-12-14 2022-12-14 CAVE type virtual simulation large screen fusion system based on multiple channels

Country Status (1)

Country Link
CN (1) CN116012680A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117292094A (en) * 2023-11-23 2023-12-26 南昌菱形信息技术有限公司 Digitalized application method and system for performance theatre in karst cave
CN117292094B (en) * 2023-11-23 2024-02-02 南昌菱形信息技术有限公司 Digitalized application method and system for performance theatre in karst cave
CN117420916A (en) * 2023-12-18 2024-01-19 北京黑油数字展览股份有限公司 Immersion type CAVE system based on holographic image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination