CN113256815A - Virtual reality scene fusion and playing method and virtual reality equipment - Google Patents

Virtual reality scene fusion and playing method and virtual reality equipment

Info

Publication number
CN113256815A
CN113256815A
Authority
CN
China
Prior art keywords
scene
virtual
user
fusion
information
Prior art date
Legal status
Granted
Application number
CN202110205130.9A
Other languages
Chinese (zh)
Other versions
CN113256815B (en)
Inventor
章志华
张帆
Current Assignee
Beijing Huaqing Yitong Technology Co ltd
Original Assignee
Beijing Huaqing Yitong Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Huaqing Yitong Technology Co ltd
Priority to CN202110205130.9A
Publication of CN113256815A
Application granted
Publication of CN113256815B
Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Stereophonic System (AREA)

Abstract

The invention discloses a virtual reality scene fusion and playing method and virtual reality equipment. The method comprises: obtaining dynamic three-dimensional grid data of at least one actual scene and obtaining stereo sound field data of the at least one actual scene; fusing the dynamic three-dimensional grid data and the stereo field data into a user virtual space to obtain a virtual fusion scene; acquiring state information of a user in the virtual fusion scene; obtaining corresponding target visual information and target sound sensation information in the virtual fusion scene according to the state information; and playing the target visual information and the target sound sensation information. The method can improve the user's sense of reality and sense of immersion in the virtual reality scene.

Description

Virtual reality scene fusion and playing method and virtual reality equipment
Technical Field
The invention relates to the technical field of virtual reality, in particular to a virtual reality scene fusion and playing method and virtual reality equipment.
Background
Virtual reality technology is a computer simulation system that can create and let users experience a virtual world. It uses a computer to generate a simulated environment, a system simulation of multi-source information fusion and of interactive three-dimensional dynamic vision and entity behavior, in which the user is immersed.
In the related art, virtual reality content is produced mainly in two ways: one is to create three-dimensional virtual content on a computer; the other is to capture images and video with dedicated devices such as panoramic cameras and form a video stream for playback.
However, both of the above ways of producing virtual reality content suffer from insufficient immersion. Computer-generated virtual content is limited by the level of three-dimensional content generation, so the expressiveness of the virtual scene cannot reach the realistic experience brought by real images. Content produced with panoramic cameras and other devices based on planar acquisition is, in principle, equivalent to playing a two-dimensional video at very high resolution; because no three-dimensional information is present, the scene cannot be roamed, and the stereo sound effect is not considered. In addition, such virtual reality content is produced in advance, so a real-time, dynamic tour experience cannot be achieved, and the user's sense of stereoscopy and immersion is insufficient.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. Therefore, an object of the present invention is to provide a virtual reality scene fusion and playing method, which can improve the user's sense of reality and sense of immersion in the virtual reality scene.
The second objective of the present invention is to provide a virtual reality device.
A third objective of the present invention is to provide another virtual reality device.
In order to solve the above problem, an embodiment of the first aspect of the present invention provides a virtual reality scene fusion and playing method, including obtaining dynamic three-dimensional mesh data of at least one actual scene, and obtaining stereo field data of the at least one actual scene; fusing the dynamic three-dimensional grid data and the stereo field data to a user virtual space to obtain a virtual fusion scene; acquiring state information of a user in the virtual fusion scene; obtaining corresponding target visual information and target sound sensation information in the virtual fusion scene according to the state information; and playing the target visual information and the target sound sensation information.
According to the virtual reality scene fusion and playing method of the embodiment of the invention, collecting the dynamic three-dimensional grid data of the actual scene enables scene roaming for the user, and collecting the stereo field data improves the sound effect. By fusing the obtained dynamic three-dimensional grid data and stereo field data of at least one actual scene into the user virtual space, the three-dimensional information and the stereo effect of the scene are combined. When the user roams in the virtual fusion scene, the target visual information and the target sound sensation information are played according to user states such as position and orientation, so that the user's sense of reality and sense of immersion in the virtual fusion scene are improved and the feeling of being personally on the scene is achieved.
In some embodiments, acquiring dynamic three-dimensional mesh data of at least one actual scene comprises: continuously acquiring three-dimensional information of the actual scene and color information of the actual scene; registering the three-dimensional information and the color information of the corresponding time sequence of the actual scene to obtain three-dimensional color point cloud data; and performing three-dimensional reconstruction on the three-dimensional color point cloud data to obtain dynamic three-dimensional grid data of the actual scene.
In some embodiments, fusing the dynamic three-dimensional mesh data and the stereo field data to a user virtual space, obtaining a virtual fused scene, comprises: acquiring set position information corresponding to the dynamic three-dimensional grid data in the user virtual space; and setting the dynamic three-dimensional grid data in the user virtual space according to the set position information so as to obtain visual information in a virtual fusion scene.
In some embodiments, fusing the dynamic three-dimensional mesh data and the stereo field data to a user virtual space to obtain a virtual fused scene, further comprising: acquiring the distance and the separation included angle from the three-dimensional model center corresponding to the dynamic three-dimensional grid data to the user visual angle center in the virtual fusion scene; and obtaining the sound sensation information in the virtual fusion scene according to the distance and the separation angle between the three-dimensional model center corresponding to the dynamic three-dimensional grid data and the user visual angle center in the virtual fusion scene and a Head Related Transfer Function (HRTF).
In some embodiments, after fusing the dynamic three-dimensional mesh data to the user virtual space, the method further comprises: obtaining the number of refraction paths from the scene light source to the user visual angle in the user virtual space; acquiring the incident direction and the emergent direction of the light on each refraction path; obtaining the brightness of the scene light source at the visual angle of the user according to the number of the refraction paths, and the incident direction and the emergent direction of the light on each refraction path; and controlling the scene light source to provide the light with the brightness to the user visual angle.
An embodiment of a second aspect of the present invention provides a virtual reality device, including: the visual acquisition module is used for acquiring three-dimensional information and color information of an actual scene; the sound acquisition module is used for acquiring stereo field data of an actual scene; the positioning module is used for acquiring the state information of the user in the virtual scene; the data processing module is used for executing the virtual reality scene fusion and playing method in the embodiment; and the playing module is connected with the data processing module and is used for playing the visual information and the sound sensation information of the virtual scene.
According to the virtual reality equipment provided by the embodiment of the invention, the data processing module executes the virtual reality scene fusion and playing method provided by the above embodiment. When a user roams in the virtual fusion scene, the playing module is controlled to play the visual information and the sound sensation information of the virtual scene according to user state information such as position and orientation, so that the user's sense of reality and sense of immersion in the virtual scene are improved and the feeling of being personally on the scene is achieved.
In some embodiments, the vision acquisition module comprises: the depth camera is used for acquiring three-dimensional information of the actual scene; and the color camera is used for acquiring color information of the actual scene.
In some embodiments, there is a plurality of vision acquisition modules.
In some embodiments, the playback module comprises: the display unit is connected with the data processing module and used for displaying the visual information of the virtual scene; and the sound playing unit is connected with the data processing module and is used for playing the sound sensation information of the virtual scene.
An embodiment of a third aspect of the present invention provides a virtual reality device, including: at least one processor; a memory communicatively coupled to the at least one processor; the memory stores a computer program executable by the at least one processor, and the at least one processor implements the virtual reality scene fusion and playing method of the above embodiments when executing the computer program.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a virtual reality scene fusion and playback method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a virtual fusion scene after fusing dynamic three-dimensional mesh data according to one embodiment of the invention;
FIG. 3 is a schematic diagram of a virtual fusion scene fusing stereo field data according to one embodiment of the invention;
FIG. 4 is a diagram illustrating obtaining the luminance of a scene light source at the user viewing angle according to an embodiment of the present invention;
fig. 5 is a block diagram of a virtual reality device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below. The embodiments described with reference to the drawings are illustrative.
In order to solve the above problem, an embodiment of the first aspect of the present invention provides a method for fusing and playing a virtual reality scene, which can improve the sense of reality and the sense of immersion of a user on the virtual reality scene.
The virtual reality scene fusion and playing method according to the embodiment of the invention is described below with reference to the accompanying drawings.
As shown in fig. 1, the virtual reality scene fusion and playing method provided by the embodiment of the invention at least includes steps S1-S5.
Step S1, acquiring dynamic three-dimensional mesh data of at least one actual scene, and acquiring stereo field data of at least one actual scene.
Specifically, by acquiring dynamic three-dimensional grid data of at least one actual scene, and in contrast to pre-made virtual reality content, the embodiment of the invention performs three-dimensional reconstruction from the dynamic actual scene acquired in real time, capturing the dynamic scene, so that the expressive force of the virtual scene can reach the sense of reality brought by a real image and real-time dynamic scene roaming becomes feasible. The stereo effect of the virtual scene is also considered: stereo field data of at least one actual scene is obtained, so that it can later be played back in the virtual scene as required, improving the sound effect of the virtual scene.
And step S2, fusing the dynamic three-dimensional grid data and the stereo field data to a user virtual space to obtain a virtual fusion scene.
The user virtual space may be understood as the virtual space provided by the virtual reality device when a user roams a virtual scene; for example, as shown in fig. 2, it may be regarded as a three-dimensional space in which a plurality of three-dimensional virtual scenes can exist. The virtual fusion scene can be understood as the scene presented after different three-dimensional virtual scenes are fused simultaneously into the user virtual space.
Specifically, a plurality of three-dimensional virtual scenes can be presented in the user virtual space, but different three-dimensional virtual scenes may be acquired from different actual scenes. Therefore, to obtain the entire three-dimensional virtual scene presented in the user virtual space, the dynamic three-dimensional grid data and stereo field data acquired from different actual scenes need to be fused according to the actual requirements of the user, so that they are placed in the same virtual space, that is, a virtual fusion scene is obtained. This improves the user's sense of reality and sense of immersion in the virtual fusion scene and achieves the feeling of being personally on the scene.
For example, as shown in fig. 2, the user virtual space includes two different three-dimensional virtual scenes, i.e., a virtual scene 1 such as a rabbit and a virtual scene 2 such as a ball. Specifically, dynamic three-dimensional grid data and stereo field data of the virtual scene 1 and the virtual scene 2 are dynamically collected in different actual scenes respectively, and the dynamic three-dimensional grid data and the stereo field data of the virtual scene 1 and the virtual scene 2 are fused into a user virtual space, so that a virtual fusion scene formed by fusing two different three-dimensional virtual scenes of the virtual scene 1 and the virtual scene 2 can be presented in the user virtual space.
And step S3, acquiring the state information of the user in the virtual fusion scene.
In the embodiment, it can be understood that, in different states, such as the head orientation of the user or the position where the user is located, the virtual fusion scenes experienced by the user are different, and therefore, when the user roams the virtual fusion scene, the state information of the user in the virtual space of the user needs to be acquired in real time, so that the state information of the user is combined to present the required virtual fusion scene to the user.
And step S4, obtaining corresponding target visual information and target sound sensation information in the virtual fusion scene according to the state information.
The target visual information can be understood as a virtual image seen by the user at the current viewing angle. The target sound sensation information may be understood as a sound heard by the user at the current location.
Specifically, according to the current state information of the user in the user virtual space, such as the current spatial position information of the user in the user virtual space and the current head orientation information of the user, a currently required virtual fusion scene is presented to the user, that is, corresponding target visual information and target sound sensation information in the virtual fusion scene are obtained.
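The following Python sketch (not part of the patent text; all names are illustrative) shows one way to derive, from the user's current spatial position and head orientation, the two quantities that the later sections rely on: the distance d from the user to a scene's model center and the separation angle θ between the gaze direction and the direction towards that center.

```python
import numpy as np

def user_relative_state(user_pos, gaze_dir, model_center):
    """Return (distance d, separation angle theta in radians) between the
    user's viewing-angle center and a fused scene's model center."""
    offset = np.asarray(model_center, float) - np.asarray(user_pos, float)
    d = float(np.linalg.norm(offset))
    gaze = np.asarray(gaze_dir, float)
    gaze = gaze / np.linalg.norm(gaze)
    cos_theta = np.clip(np.dot(offset / d, gaze), -1.0, 1.0)
    return d, float(np.arccos(cos_theta))

# Example: user at the origin looking along +x, scene center off to one side.
d1, theta1 = user_relative_state([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 1.0, 0.0])
```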
In step S5, the target visual information and the target acoustic information are played.
Specifically, the virtual fusion scene is obtained by fusing the acquired dynamic three-dimensional grid data and stereo field data of at least one actual scene. When the user roams in the virtual fusion scene, the target visual information and the target sound sensation information are played for the user in real time in combination with the user's real-time state information, so that the tour experience of the user is satisfied both visually and aurally, the user's sense of reality and sense of immersion in the virtual fusion scene are improved, and the user feels personally on the scene.
According to the virtual reality scene fusion and playing method provided by the embodiment of the invention, collecting the dynamic three-dimensional grid data of the actual scene enables scene roaming for the user, and collecting the stereo field data improves the sound effect. By fusing the obtained dynamic three-dimensional grid data and stereo field data of at least one actual scene into the user virtual space, the three-dimensional information and the stereo effect of the scene are combined. When the user roams in the virtual fusion scene, the target visual information and the target sound sensation information are played according to user state information such as position and orientation, so that the user's sense of reality and sense of immersion in the virtual fusion scene are improved and the feeling of being personally on the scene is achieved.
In some embodiments, for acquiring the dynamic three-dimensional grid data of at least one actual scene, continuously acquiring three-dimensional information of the actual scene and color information of the actual scene; registering three-dimensional information and color information of a corresponding time sequence of an actual scene to obtain three-dimensional color point cloud data; and performing three-dimensional reconstruction on the three-dimensional color point cloud data to obtain dynamic three-dimensional grid data of the actual scene.
Specifically, continuous three-dimensional information on the time sequence of the actual scene, that is, three-dimensional point cloud data, may be acquired by the TOF camera, and continuous color information on the time sequence of the actual scene, that is, color data, may be acquired by the RGB camera. And then, registering the three-dimensional information and the color information of the corresponding time sequence of the actual scene to obtain three-dimensional color point cloud data, and performing three-dimensional reconstruction on the three-dimensional color point cloud data through a three-dimensional reconstruction technology such as a voxel reconstruction algorithm to obtain dynamic three-dimensional grid data of the actual scene.
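As a concrete illustration of the registration step (not taken from the patent; the pinhole intrinsics fx, fy, cx, cy and the assumption that the depth and color frames are already pixel-aligned are hypothetical), a Python sketch of turning one TOF depth frame plus one RGB frame into three-dimensional color point cloud data could look as follows.

```python
import numpy as np

def depth_rgb_to_colored_points(depth, rgb, fx, fy, cx, cy):
    """depth: (H, W) array in metres; rgb: (H, W, 3) array.
    Returns an (N, 6) array of [x, y, z, r, g, b] colored points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    valid = depth > 0                                 # drop pixels with no depth return
    z = depth[valid]
    x = (u[valid] - cx) * z / fx                      # back-project through the pinhole model
    y = (v[valid] - cy) * z / fy
    return np.column_stack([x, y, z, rgb[valid]])
```

A sequence of such frames, one per time step, is what a voxel-based reconstruction would then turn into dynamic three-dimensional mesh data.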
In some embodiments, to fuse the dynamic three-dimensional mesh data and the stereo field data into the user virtual space and obtain a virtual fusion scene, set position information corresponding to the dynamic three-dimensional mesh data in the user virtual space may be obtained; the dynamic three-dimensional grid data is then set in the user virtual space according to the set position information to obtain the visual information in the virtual fusion scene.
That is, for the plurality of dynamic three-dimensional grid data obtained by collection, a corresponding position can be arranged for each dynamic three-dimensional grid data in the user virtual space according to the actual needs of the user, so that the fusion of the plurality of dynamic three-dimensional grid data in the user virtual space is realized. For example, as shown in fig. 2, the virtual scene 1 and the virtual scene 2 are obtained by acquiring and reconstructing dynamic three-dimensional mesh data of different actual scenes; for these two reconstructed virtual scenes, coordinates are set in the user virtual space, that is, the dynamic three-dimensional mesh data is set in the user virtual space according to the set position information, so as to obtain visual information in the virtual fusion scene. Furthermore, according to the obtained visual information in the virtual fusion scene and in combination with the state information of the user, a dynamic visual tour experience can be provided, improving the user's sense of reality and sense of immersion in the virtual reality scene.
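A minimal placement sketch (again not from the patent; rotation and set position values are illustrative) shows how each reconstructed mesh can be set at its position in the user virtual space by transforming its vertices into the shared coordinate frame.

```python
import numpy as np

def place_in_virtual_space(vertices, rotation, set_position):
    """vertices: (N, 3) mesh vertices in the scene's local frame;
    rotation: (3, 3) orientation; set_position: (3,) set position information."""
    r = np.asarray(rotation, float)
    t = np.asarray(set_position, float)
    return np.asarray(vertices, float) @ r.T + t

# e.g. place "virtual scene 2" one metre to the right of the origin, unrotated:
# scene2_world = place_in_virtual_space(scene2_vertices, np.eye(3), [1.0, 0.0, 0.0])
```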
In some embodiments, for fusing the dynamic three-dimensional grid data and the stereo field data to the user virtual space to obtain a virtual fusion scene, the method may further include obtaining a distance and a separation angle between a three-dimensional model center corresponding to the dynamic three-dimensional grid data in the virtual fusion scene and a user view angle center; and obtaining the sound sensation information in the virtual fusion scene according to the distance from the three-dimensional model center corresponding to the dynamic three-dimensional grid data in the virtual fusion scene to the user visual angle center, the separation angle, and the head-related transfer function.
The sound sensation information in the virtual fusion scene can be understood as the stereo sound field signals, generated by the virtual fusion scene, that reach the user's ears. The three-dimensional model center corresponding to the dynamic three-dimensional grid data in the virtual fusion scene can be understood as the center of gravity of the three-dimensional model reconstructed from the dynamic three-dimensional grid data. Specifically, the sound sensation information in the virtual fusion scene is obtained from the distance from the three-dimensional model center corresponding to the dynamic three-dimensional mesh data in the virtual fusion scene to the user view center, the separation angle between that model center and the user view center, and the head-related transfer function; the calculation formulas are as follows.
[Formulas (1) to (3) appear only as images in the original publication; they express the left-ear and right-ear sound field signals S_l and S_r in terms of the quantities defined below.]
Wherein S islAs sound field signal of the left ear of the user, SrAs a sound field signal of the user's right ear, di(theta) is a spatial attenuation function, hi(theta) is a head-related transmission function obtained based on the virtual fusion scene, d is the distance from the three-dimensional model center corresponding to the dynamic three-dimensional grid data in the virtual fusion scene to the user visual angle center, theta is a separation included angle between the three-dimensional model center corresponding to the dynamic three-dimensional grid data in the virtual fusion scene and the user visual angle center, and c is a preset attenuation function. Therefore, the acoustic sensation information in the virtual fusion scene can be obtained through calculation by the formula (1), the formula (2) and the formula (3). Furthermore, according to the obtained sound sensation information in the virtual fusion scene and in combination with the state information of the user, the tour experience of the user in the sense of hearing can be realized, and the virtual reality scene of the user can be improvedSense of realism and sense of immersion.
For example, as shown in fig. 3, the three-dimensional model centers corresponding to the dynamic three-dimensional mesh data in the virtual fusion scene include the center of virtual scene 1 and the center of virtual scene 2. The distance from the center of virtual scene 1 to the center of the user's viewing angle is d1, and the separation angle between the center of virtual scene 1 and the center of the user's viewing angle is θ1; the corresponding spatial attenuation function is given by an expression that appears only as an image in the original publication.
Thus, the sound sensation information generated for virtual scene 1 follows from formulas (1) to (3) with d1 and θ1 substituted (the resulting expressions appear only as images in the original publication).
Similarly, the distance from the center of virtual scene 2 to the center of the user's viewing angle is d2, and the separation angle between the center of virtual scene 2 and the center of the user's viewing angle is θ2; the sound sensation information generated for virtual scene 2 follows in the same way (the expression appears only as an image in the original publication).
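Since formulas (1) to (3) survive only as images, the following Python sketch is a simplified stand-in rather than the patented computation: it replaces the spatial attenuation function d_i(θ) with inverse-distance attenuation and the measured head-related transfer function h_i(θ) with a cosine panning law, and only illustrates how per-scene source signals, distances, and separation angles combine into left- and right-ear signals.

```python
import numpy as np

def mix_binaural(sources):
    """sources: iterable of (signal, distance_d, angle_theta) per fused scene;
    theta is in radians, positive when the scene center lies to the user's right."""
    left, right = 0.0, 0.0
    for signal, d, theta in sources:
        sig = np.asarray(signal, float)
        atten = 1.0 / max(d, 1e-3)            # stand-in for the spatial attenuation d_i(theta)
        pan = 0.5 * (1.0 + np.sin(theta))     # stand-in for the HRTF: 0 = full left, 1 = full right
        left = left + atten * (1.0 - pan) * sig
        right = right + atten * pan * sig
    return left, right

# S_left, S_right = mix_binaural([(sig1, d1, theta1), (sig2, d2, theta2)])
```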
in some embodiments, after fusing the dynamic three-dimensional mesh data to the user virtual space, the method further comprises: obtaining the number of refraction paths from scene light sources in a user virtual space to a user visual angle; acquiring the incident direction and the emergent direction of the light on each refraction path; obtaining the brightness of the scene light source at the visual angle of a user according to the number of the refraction paths, the incident direction and the emergent direction of the light on each refraction path; and controlling the scene light source to provide light with the brightness to the visual angle of a user.
That is, the embodiment of the present invention calculates the optical interaction between the adjacent dynamic three-dimensional mesh data by using the ray tracing technology on the basis of fusing the plurality of dynamic three-dimensional mesh data, so as to further improve the reality of the three-dimensional rendering. For example, as shown in fig. 4, the virtual fusion scene includes two virtual scenes: virtual scene 1, virtual scene 2. For a scene light source, emergent light of the scene light source is refracted on different virtual scenes continuously and then finally enters a user visual angle, and on the basis, a calculation formula for the brightness of the user visual angle is as follows.
[Formulas (4) and (5) appear only as images in the original publication; they express the luminance L at the user's viewing angle in terms of the quantities defined below.]
Here L is the luminance of the scene light source at the user's viewing angle, L_N is a luminance function, N is the number of refraction paths from the scene light source to the user's viewing angle, C_i is the attenuation ratio of the i-th refraction path incident on the user's viewing angle, f(w_i, w_o, p) is the bidirectional reflectance distribution function, p is the luminance point, w_i is the incident direction of the light ray on the refraction path, w_o is the exit direction of the light ray on the refraction path, θ is the angle between the incident direction of the light ray on the refraction path and the surface normal, and p(w_i) is the sampling probability density of w_i at point p. The luminance of the scene light source at the user's viewing angle in the virtual fusion scene can therefore be calculated from formulas (4) and (5), and the scene light source is controlled to provide light of this luminance to the user's viewing angle, improving the realism of the three-dimensional rendering and further improving the user's sense of reality and sense of immersion in the virtual reality scene.
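Formulas (4) and (5) likewise survive only as images, so the following Python sketch is a generic Monte Carlo estimate in their spirit rather than the patented expression: each of the N refraction paths contributes the radiance arriving along it, scaled by its attenuation ratio C_i, the BRDF value f(w_i, w_o, p), and the cosine of the incidence angle, divided by the sampling probability density p(w_i).

```python
def luminance_at_view(paths):
    """paths: list of dicts, one per refraction path from the scene light source
    to the user's viewing angle, with keys:
      'attenuation'      attenuation ratio C_i of the path,
      'brdf'             BRDF value f(w_i, w_o, p) at the luminance point,
      'cos_theta'        cosine of the angle between w_i and the surface normal,
      'pdf'              sampling probability density p(w_i),
      'source_radiance'  radiance emitted by the scene light source along the path."""
    total = 0.0
    for p in paths:
        if p["pdf"] <= 0.0:
            continue                      # skip degenerate samples
        total += (p["attenuation"] * p["brdf"] * max(p["cos_theta"], 0.0)
                  * p["source_radiance"]) / p["pdf"]
    return total / max(len(paths), 1)     # average over the N refraction paths
```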
In a second embodiment of the present invention, a virtual reality device is provided, as shown in fig. 5, a virtual reality device 10 includes a vision acquisition module 1, a sound acquisition module 2, a positioning module 3, a data processing module 4, and a playing module 5.
The visual acquisition module 1 is used for acquiring three-dimensional information and color information of an actual scene; the sound acquisition module 2 is used for acquiring stereo field data of an actual scene; the positioning module 3 is configured to obtain state information of the user in the virtual scene, such as the user's current head orientation information and spatial position information; the data processing module 4 is configured to execute the virtual reality scene fusion and playing method provided in the above embodiment; and the playing module 5 is connected with the data processing module 4 and is used for playing the visual information and the sound sensation information of the virtual scene.
In this embodiment, when the virtual reality device 10 presents a virtual scene, a specific implementation manner of the internal data processing module 4 is similar to a specific implementation manner of the virtual reality scene fusion and playing method according to any of the above embodiments of the present invention, for which reference is specifically made to the description of the virtual reality scene fusion and playing method portion, and details are not described here for reducing redundancy.
According to the virtual reality device 10 of the embodiment of the present invention, the data processing module 4 executes the virtual reality scene fusion and playing method provided by the above embodiment. When the user roams in the virtual fusion scene, the playing module 5 is controlled to play the visual information and the sound sensation information of the virtual scene according to user state information such as position and orientation, so as to improve the user's sense of reality and sense of immersion in the virtual scene and achieve the feeling of being personally on the scene.
In some embodiments, the vision acquisition module 1 comprises a depth camera and a color camera.
In particular, depth cameras such as TOF cameras are used to acquire three-dimensional information of the actual scene; color cameras, such as RGB cameras, are used to capture color information of an actual scene.
In some embodiments, there is a plurality of visual acquisition modules 1, so as to capture a plurality of different actual scenes and obtain a plurality of different three-dimensional virtual scenes.
In some embodiments, the playback module 5 comprises a display unit and a sound playback unit.
Specifically, the display unit is connected with the data processing module 4 and is used for displaying the visual information of the virtual scene; the sound playing unit, such as an earphone, is connected to the data processing module 4, and is configured to play the audio information of the virtual scene.
In another embodiment of the invention, the virtual reality device 10 may further include at least one processor and a memory communicatively coupled to the at least one processor.
In an embodiment, a computer program executable by at least one processor is stored in the memory, and when the computer program is executed by the at least one processor, the virtual reality scene fusion and playing method provided by the above embodiments is implemented.
In this embodiment, when the virtual reality device 10 presents a virtual scene, a specific implementation manner of the virtual reality device is similar to a specific implementation manner of the virtual reality scene fusion and playing method according to any of the above embodiments of the present invention, for which reference is specifically made to the description of the virtual reality scene fusion and playing method portion, and details are not repeated here in order to reduce redundancy.
In the description of this specification, any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of custom logic functions or processes, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and not to be construed as limiting the present invention, and that variations, modifications, substitutions and alterations may be made in the above embodiments by those of ordinary skill in the art within the scope of the present invention.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. A virtual reality scene fusion and playing method is characterized by comprising the following steps:
acquiring dynamic three-dimensional grid data of at least one actual scene, and acquiring stereo field data of the at least one actual scene;
fusing the dynamic three-dimensional grid data and the stereo field data to a user virtual space to obtain a virtual fusion scene;
acquiring state information of a user in the virtual fusion scene;
obtaining corresponding target visual information and target sound sensation information in the virtual fusion scene according to the state information;
and playing the target visual information and the target sound sensation information.
2. The virtual reality scene fusion and playing method according to claim 1, wherein obtaining dynamic three-dimensional mesh data of at least one actual scene comprises:
continuously acquiring three-dimensional information of the actual scene and color information of the actual scene;
registering the three-dimensional information and the color information of the corresponding time sequence of the actual scene to obtain three-dimensional color point cloud data;
and performing three-dimensional reconstruction on the three-dimensional color point cloud data to obtain dynamic three-dimensional grid data of the actual scene.
3. The virtual reality scene fusion and playing method according to claim 1, wherein the dynamic three-dimensional mesh data and the stereo field data are fused to a user virtual space to obtain a virtual fusion scene, and the method comprises:
acquiring set position information corresponding to the dynamic three-dimensional grid data in the user virtual space;
and setting the dynamic three-dimensional grid data in the user virtual space according to the set position information so as to obtain visual information in a virtual fusion scene.
4. The virtual reality scene fusion and playing method according to claim 3, wherein the dynamic three-dimensional mesh data and the stereo field data are fused to a user virtual space to obtain a virtual fusion scene, further comprising:
acquiring the distance and the separation included angle from the three-dimensional model center corresponding to the dynamic three-dimensional grid data to the user visual angle center in the virtual fusion scene;
and obtaining the sound sensation information in the virtual fusion scene according to the distance from the three-dimensional model center corresponding to the dynamic three-dimensional grid data in the virtual fusion scene to the user visual angle center, the separation included angle and the head-related transfer function.
5. The virtual reality scene fusion and playing method according to claim 3, wherein after fusing the dynamic three-dimensional mesh data to the user virtual space, the method further comprises:
obtaining the number of refraction paths from the scene light source to the user visual angle in the user virtual space;
acquiring the incident direction and the emergent direction of the light on each refraction path;
obtaining the brightness of the scene light source at the visual angle of the user according to the number of the refraction paths, and the incident direction and the emergent direction of the light on each refraction path;
and controlling the scene light source to provide the light with the light brightness to the user visual angle.
6. A virtual reality device, comprising:
the visual acquisition module is used for acquiring three-dimensional information and color information of an actual scene;
the sound acquisition module is used for acquiring stereo field data of an actual scene;
the positioning module is used for acquiring the state information of the user in the virtual scene;
a data processing module for executing the virtual reality scene fusion and playing method of any one of claims 1 to 5;
and the playing module is connected with the data processing module and is used for playing the visual information and the sound sensation information of the virtual scene.
7. The virtual reality device of claim 6, wherein the visual acquisition module comprises:
the depth camera is used for acquiring three-dimensional information of the actual scene;
and the color camera is used for acquiring color information of the actual scene.
8. The virtual reality device of claim 6, wherein the visual acquisition module is plural.
9. The virtual reality device of claim 6, wherein the playback module comprises:
the display unit is connected with the data processing module and used for displaying the visual information of the virtual scene;
and the sound playing unit is connected with the data processing module and is used for playing the sound sensation information of the virtual scene.
10. A virtual reality device, comprising:
at least one processor;
a memory communicatively coupled to the at least one processor;
the memory stores a computer program executable by the at least one processor, and the at least one processor implements the virtual reality scene fusion and playing method according to any one of claims 1 to 5 when executing the computer program.
CN202110205130.9A 2021-02-24 2021-02-24 Virtual reality scene fusion and playing method and virtual reality equipment Active CN113256815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110205130.9A CN113256815B (en) 2021-02-24 2021-02-24 Virtual reality scene fusion and playing method and virtual reality equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110205130.9A CN113256815B (en) 2021-02-24 2021-02-24 Virtual reality scene fusion and playing method and virtual reality equipment

Publications (2)

Publication Number Publication Date
CN113256815A (en) 2021-08-13
CN113256815B CN113256815B (en) 2024-03-22

Family

ID=77181414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110205130.9A Active CN113256815B (en) 2021-02-24 2021-02-24 Virtual reality scene fusion and playing method and virtual reality equipment

Country Status (1)

Country Link
CN (1) CN113256815B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050286758A1 (en) * 2004-06-28 2005-12-29 Microsoft Corporation Color segmentation-based stereo 3D reconstruction system and process employing overlapping images of a scene captured from viewpoints forming either a line or a grid
CN103810353A (en) * 2014-03-09 2014-05-21 杨智 Real scene mapping system and method in virtual reality
US20170004651A1 (en) * 2014-07-24 2017-01-05 Youngzone Culture (Shanghai) Co., Ltd. Augmented reality technology-based handheld viewing device and method thereof
CN106485782A (en) * 2016-09-30 2017-03-08 珠海市魅族科技有限公司 Method and device that a kind of reality scene is shown in virtual scene
CN108074278A (en) * 2016-11-17 2018-05-25 百度在线网络技术(北京)有限公司 Video presentation method, device and equipment
CN108109207A (en) * 2016-11-24 2018-06-01 中安消物联传感(深圳)有限公司 A kind of visualization solid modelling method and system
US10228760B1 (en) * 2017-05-23 2019-03-12 Visionary Vr, Inc. System and method for generating a virtual reality scene based on individual asynchronous motion capture recordings
US20200380031A1 (en) * 2018-07-04 2020-12-03 Tencent Technology (Shenzhen) Company Limited Image processing method, storage medium, and computer device
CN110610536A (en) * 2019-07-15 2019-12-24 北京七展国际数字科技有限公司 Method for displaying real scene for VR equipment
CN111863198A (en) * 2020-08-21 2020-10-30 华北科技学院 Rehabilitation robot interaction system and method based on virtual reality
CN112188382A (en) * 2020-09-10 2021-01-05 江汉大学 Sound signal processing method, device, equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024001174A1 (en) * 2022-06-29 2024-01-04 中兴通讯股份有限公司 Virtual reality-based data processing method, controller, and virtual reality device
CN115953520A (en) * 2023-03-10 2023-04-11 浪潮电子信息产业股份有限公司 Recording and playback method and device for virtual scene, electronic equipment and medium
CN115953520B (en) * 2023-03-10 2023-07-14 浪潮电子信息产业股份有限公司 Recording and playback method and device for virtual scene, electronic equipment and medium

Also Published As

Publication number Publication date
CN113256815B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
US10560687B2 (en) LED-based integral imaging display system as well as its control method and device
US11880932B2 (en) Systems and associated methods for creating a viewing experience
CN112492380B (en) Sound effect adjusting method, device, equipment and storage medium
CN113473159B (en) Digital person live broadcast method and device, live broadcast management equipment and readable storage medium
US5495576A (en) Panoramic image based virtual reality/telepresence audio-visual system and method
WO2019041351A1 (en) Real-time aliasing rendering method for 3d vr video and virtual three-dimensional scene
WO2018196469A1 (en) Method and apparatus for processing audio data of sound field
DE112016004640T5 (en) FILMIC MACHINING FOR VIRTUAL REALITY AND EXTENDED REALITY
US20090238378A1 (en) Enhanced Immersive Soundscapes Production
CN113256815B (en) Virtual reality scene fusion and playing method and virtual reality equipment
WO1996021321A1 (en) Virtual reality television system
EP3617871A1 (en) Audio apparatus and method of audio processing
CN109640070A (en) A kind of stereo display method, device, equipment and storage medium
CN112446939A (en) Three-dimensional model dynamic rendering method and device, electronic equipment and storage medium
KR20190031220A (en) System and method for providing virtual reality content
Hong et al. Towards 3D television through fusion of kinect and integral-imaging concepts
JP7054351B2 (en) System to play replay video of free viewpoint video
CN105898562A (en) Virtual reality terminal and method and device for simulating play scene thereof
CN113935907A (en) Method, apparatus, electronic device, and medium for pre-correcting image aberration
US20220036075A1 (en) A system for controlling audio-capable connected devices in mixed reality environments
CN103856777A (en) Video coding and decoding method based on optical field rendering
RU2815366C2 (en) Audio device and audio processing method
CN115187754A (en) Virtual-real fusion method, device, equipment and storage medium
CN207410480U (en) A kind of simulator and sound pick-up outfit
Bouvier et al. Immersive visual and audio world in 3D

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant