CN117115395A - Fusion method, device, equipment and medium of virtual reality and real scene - Google Patents

Fusion method, device, equipment and medium of virtual reality and real scene

Info

Publication number
CN117115395A
Authority
CN
China
Prior art keywords
virtual reality
scene
scene information
real
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210541980.0A
Other languages
Chinese (zh)
Inventor
周昱溦
赵文珲
杨扬
涂孟琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210541980.0A priority Critical patent/CN117115395A/en
Publication of CN117115395A publication Critical patent/CN117115395A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/005 - General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a method, a device, equipment and a medium for fusing virtual reality and a real scene, wherein the method comprises the following steps: acquiring real scene information acquired by a plurality of first cameras; generating fusion scene information according to virtual reality scene information and the real scene information; and displaying the fusion scene information. By combining the real scene into the virtual reality scene, the application achieves a seamless splicing effect of the virtual and the real, thereby giving the user a more realistic virtual experience and providing conditions for improving the user experience.

Description

Fusion method, device, equipment and medium of virtual reality and real scene
Technical Field
The embodiment of the application relates to the technical field of virtual reality, in particular to a method, a device, equipment and a medium for fusing virtual reality and a real scene.
Background
Virtual Reality (VR) is a product of modern technological development. It deeply integrates technologies such as computer technology, electronic information technology and simulation technology to simulate a virtual environment and give people a sense of immersion.
Currently, people watch virtual scenes, characters and the like by wearing VR devices, so that users can experience VR effects. However, these virtual scenes or characters are designed in advance or rendered according to a specific algorithm and are not combined with real scenes, so they cannot give the user a more realistic virtual experience. How to combine virtual reality with real scenes therefore becomes a problem to be solved.
Disclosure of Invention
The application provides a method, a device, equipment and a medium for fusing virtual reality and real scenes, which achieve a seamless splicing effect of the virtual and the real, thereby giving the user a more realistic virtual experience and providing conditions for improving the user experience.
In a first aspect, an embodiment of the present application provides a method for fusing virtual reality and a real scene, including:
acquiring real scene information acquired by a plurality of first cameras;
generating fusion scene information according to the virtual reality scene information and the real scene information;
and displaying the fusion scene information.
In a second aspect, an embodiment of the present application provides a fusion device of virtual reality and a real scene, including:
the information acquisition module is used for acquiring real scene information acquired by the plurality of first cameras;
The information fusion module is used for generating fusion scene information according to the virtual reality scene information and the real scene information;
and the information display module is used for displaying the fusion scene information.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor and a memory, wherein the memory is used for storing a computer program, and the processor is used for calling and running the computer program stored in the memory, so as to execute the fusion method of virtual reality and real scenes described above.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, configured to store a computer program, where the computer program causes a computer to execute the method for fusing virtual reality and real scenes according to the embodiment of the first aspect.
The technical scheme disclosed by the embodiment of the application has the following beneficial effects:
Real scene information acquired by a plurality of first cameras is acquired, fusion scene information is generated according to virtual reality scene information and the real scene information, and the fusion scene information is displayed. In this way, by combining the real scene into the virtual reality scene, a seamless splicing effect of the virtual and the real is achieved, giving the user a more realistic virtual experience and providing conditions for improving the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a fusion method of virtual reality and real scene provided by an embodiment of the application;
fig. 2 is a flow chart of another fusion method of virtual reality and real scene provided by the embodiment of the application;
fig. 3 is a schematic flow chart of setting up a virtual reality scene and calibrating a camera in the virtual reality scene according to an embodiment of the present application;
fig. 4 is a schematic block diagram of a fusion device of virtual reality and real scene provided by an embodiment of the present application;
fig. 5 is a schematic block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
When people watch virtual scenes or characters by wearing VR devices to experience VR effects, the virtual scenes or characters displayed by the VR device are designed in advance or rendered according to a specific algorithm and are not combined with the real scene, so the user cannot get a more realistic virtual experience. Therefore, the application designs a fusion method of virtual reality and real scenes to achieve the effect of fusing the virtual and the real, thereby giving the user a more realistic virtual experience.
In order to facilitate understanding of the embodiments of the present application, before describing the embodiments of the present application, some concepts related to all embodiments of the present application are explained appropriately, specifically as follows:
1) Virtual Reality (VR) is a technology for creating and experiencing a virtual world. It computes and generates a virtual environment, which is a multi-source-information, fused, interactive three-dimensional dynamic view combined with simulation of entity behavior (the virtual reality mentioned herein includes at least visual perception, and may also include auditory perception, tactile perception, motion perception, and even taste and olfactory perception). It immerses the user in the simulated virtual reality environment and enables applications in various virtual environments such as maps, games, videos, education, medical treatment, simulation, collaborative training, sales, manufacturing assistance, maintenance and repair.
2) A virtual reality device (VR device) may be provided in the form of glasses, a head-mounted display (Head Mount Display, abbreviated as HMD), or contact lenses for realizing visual perception and other forms of perception; however, the form of the virtual reality device is not limited thereto, and it may be further miniaturized or enlarged according to actual needs.
Optionally, the virtual reality device described in the embodiments of the present application may include, but is not limited to, the following types:
2.1) Computer-side virtual reality (PCVR) device, which uses the PC side to perform the related computation of the virtual reality function and the data output; the external computer-side virtual reality device uses the data output by the PC side to realize the virtual reality effect.
2.2) Mobile virtual reality device, which supports setting up a mobile terminal (e.g., a smart phone) in various ways (e.g., a head-mounted display provided with a dedicated card slot); connected to the mobile terminal by wire or wirelessly, the mobile terminal performs the related calculations of the virtual reality function and outputs data to the mobile virtual reality device, for example viewing virtual reality video through an APP of the mobile terminal.
2.3) All-in-one virtual reality device, which has a processor for performing the related computation of virtual functions and therefore has independent virtual reality input and output functions; it does not need to be connected to a PC or a mobile terminal and offers a high degree of freedom of use.
3) The virtual Field Of View (FOV) is the area of the virtual environment that the user can perceive through the lens of the virtual reality device; the perceived area is expressed as the field of view (FOV).
4) A virtual reality scene is a virtual scene that an application program displays (or provides) while running on a virtual reality device. The present application specifically refers to a scene in which virtual reality scene information is presented by using a virtual reality technology. The virtual scene may be a simulation environment for the real world, a semi-simulation and semi-fictional virtual scene, or a pure fictional virtual scene. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application. For example, the virtual scene may include sky, land, sea, etc., and the land may include environmental elements such as deserts, cities, etc.
The following describes in detail a method for fusing virtual reality and real scene provided by the embodiment of the application with reference to the accompanying drawings.
Fig. 1 is a flow chart of a fusion method of virtual reality and real scene provided by an embodiment of the application. The embodiment of the application is applicable to a scene fusing virtual reality and a real scene, and the virtual reality and real scene fusing method can be executed by a virtual reality and real scene fusing device so as to realize control over the virtual reality and real scene fusing process. The fusion device of the virtual reality and the real scene can be composed of hardware and/or software and can be integrated in the electronic equipment. In the embodiment of the present application, the electronic device is preferably a VR device. Wherein the VR device may be, but is not limited to: VR glasses, VR helmet, VR eye-shade or VR all-in-one etc..
As shown in fig. 1, the fusion method of the virtual reality and the real scene includes the following steps:
s101, acquiring real scene information acquired by a plurality of first cameras.
The first camera is a real camera arranged in a real scene. For example, a 3D camera, a normal camera, or a high definition camera, etc.
The real scene refers to any real scene, such as a built real stage, etc.
In the present embodiment, the real scene information includes: a real scene picture, or a real scene video, etc.
For example, before S101 is executed, an equal-proportion virtual reality scene may be constructed based on the real scene, and each second camera in the constructed virtual reality scene may be calibrated, so as to obtain the final virtual reality scene. The final virtual reality scene is then installed into the electronic device, so that the electronic device can present corresponding virtual reality scene information based on the virtual reality scene. In this embodiment, the virtual reality scene information is designed in advance and configured in the electronic device together with the virtual reality scene.
Calibrating each second camera in the built virtual reality scene, specifically calibrating the initial position and/or the initial angle of each second camera.
After calibrating each second camera in the virtual reality scene, the electronic device further stores the current position and/or the current angle of each second camera so as to provide basis for obtaining corresponding real scene information based on the current position and/or the current angle of each second camera.
In this embodiment, the virtual reality scene information includes: virtual reality scene pictures, or virtual reality scene videos, etc.
The second camera specifically refers to a virtual camera, and the type of the virtual camera is set according to the type of the first camera.
It should be noted that, setting up a virtual reality scene and calibrating each second camera in the virtual reality scene will be described in detail in the following embodiments of setting up a virtual reality scene and processing the virtual reality scene, which is not described in detail herein.
When the installed virtual reality scene is used, the electronic equipment can acquire real scene information acquired by the plurality of first cameras from the real scene end so as to lay a foundation for subsequent generation of fusion scene information based on the real scene information.
As an optional implementation manner, when the embodiment of the application acquires the real scene information acquired by the plurality of first cameras from the real scene end, the implementation may be realized by the following steps:
S11, acquiring the current position and/or the current angle of each second camera in a virtual reality scene, wherein the virtual reality scene is built based on a real scene, and the initial position and/or the initial angle of each second camera in the virtual reality scene is the same as the initial position and/or the initial angle of each first camera in the real scene.
The current position of the second camera is the calibrated position.
Similarly, the current angle of the second camera is the calibrated angle.
After calibrating the initial position and/or the initial angle of each second camera in the virtual reality scene, the electronic device stores the calibrated current position and/or the calibrated current angle of each second camera. Therefore, the electronic equipment can acquire the current position and/or the current angle of each second camera from the storage unit of the electronic equipment.
S12, according to the current position and/or the current angle of each second camera, the position and/or the angle of the first camera corresponding to each second camera in the real scene are adjusted.
Considering that there is a correspondence between each second camera in the virtual reality scene and a first camera in the real scene, in order to keep the position and/or angle of each second camera in the virtual reality scene consistent with the position and/or angle of the corresponding first camera in the real scene, the application can adjust the position and/or angle of the first camera corresponding to each second camera in the real scene according to the current position and/or current angle of each second camera.
For example, when the position and/or angle of the first camera corresponding to each second camera in the real scene is adjusted according to the current position and/or current angle of each second camera, the first camera associated with each second camera may be found out from all the first cameras according to the identification information of each second camera. And then, according to the current position and/or the current angle of each second camera, adjusting the position and/or the angle of the searched first camera.
The identification information refers to information capable of uniquely determining the identity of the camera, such as a serial number, a name or the like, and the application is not particularly limited.
S13, acquiring real scene information acquired by each first camera according to the adjusted position and/or angle.
After the position and/or angle of each first camera in the real scene is adjusted, each first camera automatically collects real scene information in real time according to the adjusted position and/or angle, and the collected real scene information can be sent to the electronic device, so that the electronic device can perform the fusion of virtual reality and the real scene based on the real scene information acquired by the first cameras.
The method for acquiring the real scene information acquired by each first camera according to the adjusted position and/or angle in the application can be realized in the modes of active pushing or passive pulling, and the like, and is not particularly limited.
That is, after each first camera in the real scene collects the real scene information, the collected real scene information may be actively transmitted to the electronic device, or the collected real scene information may be transmitted to the electronic device when an acquisition request transmitted by the electronic device is received.
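As an illustration only (not part of the claimed solution), the following sketch shows how the calibrated second-camera poses could drive the adjustment of the first cameras and the acquisition of their real scene information. The camera objects and their set_pose/capture methods are hypothetical placeholders assumed for the example.

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    position: tuple  # (x, y, z) in scene coordinates
    angle: tuple     # (pitch, yaw, roll) in degrees

def sync_and_acquire(second_camera_poses, first_cameras):
    """Adjust each real (first) camera to the calibrated pose of its virtual
    (second) counterpart, then collect the real scene information it captures."""
    real_scene_info = {}
    for cam_id, pose in second_camera_poses.items():
        first_cam = first_cameras[cam_id]               # matched by identification info
        first_cam.set_pose(pose.position, pose.angle)   # adjust position and/or angle
        real_scene_info[cam_id] = first_cam.capture()   # pulled (or pushed) frame/picture
    return real_scene_info
```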
S102, generating fusion scene information according to the virtual reality scene information and the real scene information.
For example, the present application may first determine virtual reality scene information corresponding to each real scene information, and then generate fusion scene information according to the virtual reality scene information and the real scene information having the correspondence.
Considering that both the virtual reality scene information and the real scene information can be a picture or a video, generating the fusion scene information according to the virtual reality scene information and the real scene information may include, but is not limited to, the following two modes:
Mode one
If the real scene information and the virtual reality scene information are videos, fusing the real scene video and the virtual reality scene video according to a preset virtual reality space model to generate fused scene information.
In this embodiment, the preset virtual reality space model is one model in the built virtual reality space.
The fusion scene information here specifically refers to a fusion scene video, i.e., a video combining the virtual and the real.
As an optional implementation manner, the method fuses the real scene video and the virtual reality scene video, which may be to superimpose the real scene video and the virtual reality scene video according to a preset virtual reality space model, so as to generate the video under the fused scene. I.e. fusion scene video or virtual-real combined video.
It should be noted that, when the real scene video and the virtual reality scene video are superimposed according to the preset virtual reality space model, the generated fusion scene information can be described from two perspectives, that of the virtual reality device and that of the user, specifically as follows:
From the perspective of the virtual reality device, the device superimposes the real scene video and the virtual reality scene video according to the preset virtual reality space model, and the generated fusion scene information is specifically a fusion scene video.
From the perspective of the user, the device superimposes the real scene video and the virtual reality scene video according to the preset virtual reality space model, and the generated fusion scene information is a rendered fusion scene video, which is specifically the virtual reality space under the combined virtual and real scene, i.e., the fusion display space.
In order to clearly illustrate the present embodiment, the present embodiment is exemplified below.
For example, the real scene video is superimposed into the virtual reality scene video according to a preset virtual reality space model, so as to obtain a virtual reality space under the virtual-real combined scene.
As another example: the virtual reality environment includes a sphere space, which may be 180 degrees or 360 degrees, and the real scene video and the virtual reality scene video can be superimposed according to the sphere space, so as to obtain a fusion display space of the virtual reality scene and the real scene.
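As an illustration only, the following is a minimal sketch of superimposing a real scene frame onto a virtual reality scene rendered as an equirectangular sphere-space panorama. The region-placement scheme and the yaw/pitch/FOV parameters are simplifying assumptions for the example and are not the preset virtual reality space model itself.

```python
import numpy as np

def fuse_frames(virtual_pano, real_frame, yaw_deg, pitch_deg, fov_deg=60):
    """Superimpose a real camera frame onto an equirectangular virtual panorama.

    virtual_pano : H x W x 3 array covering 360 (horizontal) x 180 (vertical) degrees.
    real_frame   : h x w x 3 array captured by a first camera.
    yaw_deg, pitch_deg : where that camera looks inside the sphere space.
    """
    H, W = virtual_pano.shape[:2]
    # Size of the region the real frame occupies on the panorama.
    rw = int(W * fov_deg / 360.0)
    rh = int(H * fov_deg / 180.0)
    # Centre of that region, derived from the camera orientation.
    cx = int((yaw_deg % 360) / 360.0 * W)
    cy = int((90 - pitch_deg) / 180.0 * H)
    x0, y0 = max(cx - rw // 2, 0), max(cy - rh // 2, 0)
    x1, y1 = min(x0 + rw, W), min(y0 + rh, H)
    # Nearest-neighbour resize of the real frame into the target region.
    ys = np.linspace(0, real_frame.shape[0] - 1, y1 - y0).astype(int)
    xs = np.linspace(0, real_frame.shape[1] - 1, x1 - x0).astype(int)
    fused = virtual_pano.copy()
    fused[y0:y1, x0:x1] = real_frame[ys][:, xs]
    return fused
```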
Mode two
If the real scene information and the virtual reality scene information are pictures, fusing the real scene picture and the virtual reality scene picture according to a preset virtual reality space model to generate fused scene information.
The fusion scene information here specifically refers to a fusion scene picture, i.e., a picture combining the virtual and the real.
As an optional implementation manner, the method fuses the real scene picture and the virtual reality scene picture, which may be to superimpose the real scene picture and the virtual reality scene picture according to a preset virtual reality space model, so as to generate the picture under the fused scene. I.e. fusion of scene pictures or virtual-real combined pictures.
It should be noted that, when the real scene picture and the virtual reality scene picture are superimposed according to the preset virtual reality space model, the generated fusion scene information can be described from two perspectives, that of the virtual reality device and that of the user, specifically as follows:
From the perspective of the virtual reality device, the device superimposes the real scene picture and the virtual reality scene picture according to the preset virtual reality space model, and the generated fusion scene information is specifically a fusion scene picture.
From the perspective of the user, the device superimposes the real scene picture and the virtual reality scene picture according to the preset virtual reality space model, and the generated fusion scene information is a rendered fusion scene picture, which is specifically the virtual reality space under the combined virtual and real scene, i.e., the fusion display space.
For example, the real scene picture is superimposed into the virtual reality scene picture according to a preset virtual reality space model, so as to obtain a virtual reality space under the virtual reality combined scene.
It should be noted that the above two ways are merely exemplary of the embodiments of the present application, and are not meant to be a specific limitation thereof.
S103, displaying the fusion scene information.
Illustratively, after generating the fused scene information, the present application sends the fused scene information to the display to cause the display to render and display the fused scene information.
That is, when the fused scene information is a fused video, the display may display the fused video; when the fused scene information is a fused picture, the display may display the fused picture. Therefore, the user can experience the effect of the fusion of the real scene and the virtual reality scene, and the seamless splicing of the real scene and the virtual reality scene is achieved.
When the fusion scene information is a fusion video, the format in which the electronic device displays the video is typically 180 degrees or 360 degrees. A 180-degree video is displayed in hemispherical form, and a 360-degree video is displayed on an entire sphere.
Based on the above description, when the fusion video is displayed, the fusion video can be restored on the surface of the hemisphere or the whole sphere, so that the display information has a 3D depth effect, and the immersion feeling provided for the user is stronger.
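As an illustration only, the sketch below generates the surface (vertices and texture coordinates) of a hemisphere or whole sphere onto which a 180-degree or 360-degree fusion video could be restored; the mesh resolution and coordinate conventions are assumptions made for the example.

```python
import numpy as np

def sphere_mesh(h_span_deg=360, v_span_deg=180, rings=32, segments=64, radius=1.0):
    """Vertices and UVs for the (hemi)sphere surface on which the fused
    180- or 360-degree video is restored; h_span_deg=180 gives a hemisphere."""
    vertices, uvs = [], []
    for r in range(rings + 1):
        v = r / rings
        theta = np.radians(v * v_span_deg)              # polar angle
        for s in range(segments + 1):
            u = s / segments
            phi = np.radians(u * h_span_deg)            # azimuth
            x = radius * np.sin(theta) * np.cos(phi)
            y = radius * np.cos(theta)
            z = radius * np.sin(theta) * np.sin(phi)
            vertices.append((x, y, z))
            uvs.append((u, v))                          # sample the fused video here
    return np.array(vertices), np.array(uvs)
```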
In some possible implementations of the present application, when the display of the electronic device displays the fusion scene information, the fusion scene information corresponding to each camera may have a different emphasis area (i.e., target area), because the positions and/or angles of the cameras differ. Therefore, when displaying the fusion scene information, a target area can be determined from the fusion scene information, and the target area is displayed.
As an optional implementation, when determining the target area from the fusion scene information, the application can clip the fusion scene information according to a preset area to obtain the target area.
The preset area can be determined according to the effective pictures of cameras with different positions. In this embodiment, the camera refers to the first camera or the second camera.
For example, if the effective picture of a camera that captures the distant view is the upper-right region of the picture, the fusion scene information corresponding to that camera is clipped using the upper-right region, and the clipped upper-right region is displayed on the display.
It can be understood that clipping the target area out of the fusion scene information and displaying it can effectively prevent the virtual reality scene fused with the real scene information from being seen through (i.e., from exposing content that breaks the illusion), further improving the fusion display effect of the virtual reality scene and the real scene.
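As an illustration only, a minimal sketch of clipping the target area from the fusion scene information according to a preset area; the (x, y, width, height) region format and the example values are assumptions.

```python
def clip_target_area(fused_frame, preset_region):
    """Clip the target area out of a fused frame.

    preset_region : (x, y, width, height) determined by each camera's
                    effective picture, e.g. the upper-right region for a
                    camera that captures the distant view.
    """
    x, y, w, h = preset_region
    return fused_frame[y:y + h, x:x + w]

# Hypothetical usage: keep only the upper-right quarter of a 1920x1080 fused frame.
# target = clip_target_area(fused_frame, (960, 0, 960, 540))
```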
According to the fusion method of the virtual reality and the real scene, the real scene information acquired by the plurality of first cameras is acquired, and the fusion scene information is generated and displayed according to the virtual reality scene information and the real scene information. Therefore, the seamless splicing effect of the virtual and the reality is realized by combining the real scene in the virtual reality scene, so that more real virtual feeling is brought to the user, and conditions are provided for improving the use experience of the user.
The following describes, in connection with one implementation, a scenario to which the present disclosure may be applied:
With the development of virtual reality technology, a performer can use this technology to give a virtual reality performance, for example a virtual reality concert, while a spectator wearing a virtual reality device gets an immersive experience similar to that of a real concert. For example, with the technical solution provided by the embodiments of the present disclosure, real video streams of the performer at different positions can be collected by the first cameras in the real scene and sent to the virtual reality device on the audience side. After receiving the real video stream sent by each first camera, the virtual reality device on the audience side determines the virtual reality video stream corresponding to each real video stream according to the position of each first camera, and fuses each real video stream with each virtual reality video stream according to the preset virtual reality space model to generate a fusion scene video corresponding to each second camera in the virtual reality space. The generated fusion scene video is then rendered and displayed in the display of the virtual reality device in the form of a fusion display space. In addition, the virtual reality device can provide auditory perception, tactile perception, motion perception, and even taste and olfactory perception, and realize fused, interactive three-dimensional dynamic views of the virtual environment and simulation of entity behaviors, so that the user is immersed in the simulated virtual reality environment while the performer performs in the virtual reality environment.
As can be seen from the above description, the present application generates the fusion scene information according to the real scene information and the virtual reality scene information, and displays the fusion scene, so as to achieve the effect of fusion between the virtual and the reality.
On the basis of the embodiment, when the fusion scene information is displayed, the application can also switch the view angles of the fusion scene information so as to meet the use requirement of a user for watching the fusion scene information. The following describes a process of performing view angle switching on the fused scene information according to an embodiment of the present application with reference to fig. 2.
As shown in fig. 2, the fusion method of the virtual reality and the real scene includes the following steps:
s201, acquiring real scene information acquired by a plurality of first cameras.
S202, generating fusion scene information according to the virtual reality scene information and the real scene information.
S203, displaying the fusion scene information.
S204, receiving a view angle switching instruction.
S205, according to the view angle switching instruction, performing view angle switching on the fusion scene information.
For example, the user may send the view angle switching instruction to the electronic device through a gesture switching instruction, a voice switching instruction, or other manners, so that the electronic device performs view angle switching on the fused scene information according to the received view angle switching instruction.
Alternatively, a view angle switching file is preconfigured in the electronic device, so that when displaying the fusion scene information, the electronic device obtains the view angle switching file and, according to all the view angle information in the file, controls the switching of the view angle of the displayed fusion scene information.
That is, the application can control the electronic device to switch the visual angle of the displayed fusion scene information in a manual or automatic switching mode, so that a user can not miss any important information, thereby meeting the requirement that the user views the fusion scene information under different visual angles, and effectively improving the viewing experience of the user.
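As an illustration only, the following sketch covers both switching modes: an automatic mode driven by a preconfigured view angle switching file and a manual mode driven by gesture or voice instructions. The renderer.set_view_angle call, the instruction queue interface and the JSON file format are hypothetical assumptions for the example.

```python
import json
import time

def run_view_switching(renderer, switch_file=None, instruction_queue=None):
    """Switch the viewing angle of the displayed fusion scene information,
    either automatically from a preconfigured file or manually from
    gesture/voice switching instructions placed on a queue."""
    if switch_file:                          # automatic switching
        with open(switch_file) as f:
            schedule = json.load(f)          # e.g. [{"delay": 5.0, "yaw": 90, "pitch": 0}, ...]
        for entry in schedule:
            time.sleep(entry["delay"])       # wait before applying the next view angle
            renderer.set_view_angle(entry["yaw"], entry["pitch"])
    elif instruction_queue:                  # manual switching
        while True:
            instr = instruction_queue.get()  # blocks until a switch instruction arrives
            if instr is None:
                break
            renderer.set_view_angle(instr["yaw"], instr["pitch"])
```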
According to the fusion method of the virtual reality and the real scene, the real scene information acquired by the plurality of first cameras is acquired, and the fusion scene information is generated and displayed according to the virtual reality scene information and the real scene information. Therefore, the seamless splicing effect of the virtual and the reality is realized by combining the real scene in the virtual reality scene, so that more real virtual feeling is brought to the user, and conditions are provided for improving the use experience of the user. In addition, the view angle switching is performed on the fused scene information according to the view angle switching instruction, so that a user cannot miss important information, the fused scene information under different view angles can be watched by the user, and therefore the user watching experience is effectively improved.
In the following, with reference to fig. 3, an implementation process of setting up an equal proportion of virtual reality scenes based on real scenes and calibrating each second camera in the set up virtual reality scenes to obtain a final virtual reality scene is described.
It should be noted that the implementation process may be performed by any device having a data processing function, for example, the device may be a desktop computer, a notebook computer, a palm computer, or the like.
As shown in fig. 3, the implementation process may include the following steps:
s301, generating a virtual reality scene according to the real scene, wherein a second camera with the same position and/or angle as the first camera in the real scene is arranged in the virtual reality scene.
For example, the real scene data may be obtained by data acquisition of the real scene using a camera or a scanning device. Then, according to real scene data acquired by a camera or a scanning device, building a virtual reality scene with the ratio of 1:1 to the real scene.
Considering that a plurality of first cameras may be deployed in the real scene to acquire real scene information at different angles, in order to completely reproduce the real scene, the embodiment of the application can set up second cameras with the same positions and/or angles in the built virtual reality scene according to the deployment positions and/or angles of the first cameras in the real scene.
S302, controlling a third camera to acquire initial real scene information according to preset acquisition information.
In the embodiment of the application, the preset acquisition information refers to information for calibrating each second camera in the constructed virtual reality scene. The preset acquisition information can be adaptively set according to actual application requirements. Such as acquisition location and/or acquisition angle, etc., which are not particularly limited herein.
The initial real scene information may be a real scene image; in this embodiment it may be a camera preview, i.e., a real scene image temporarily captured by the first camera.
The third camera may be any camera other than the first camera and the second camera. And the third camera is the same type as the first camera and the second camera.
Considering that the initial position and/or initial angle of each second camera in the built virtual reality scene may not meet the actual viewing requirements of the user, the initial position and/or initial angle of each second camera in the virtual reality scene needs to be calibrated, so that the calibrated position and/or angle of each second camera meets the viewing requirements of the user.
In addition, calibration material is needed when calibrating the initial position and/or initial angle of each second camera in the virtual reality scene, and the calibration material may be a real scene image (i.e., a camera preview). Therefore, this embodiment controls the third camera to acquire at least one piece of initial real scene information according to the preset acquisition information, laying a foundation for calibrating the position and/or angle of each second camera.
Specifically, when the third camera is controlled to acquire initial real scene information according to the preset acquisition information, the implementation process is as follows: first, the preset acquisition information is obtained; second, the position and/or angle of the third camera is adjusted according to each piece of preset acquisition information; finally, the adjusted third camera is controlled to acquire at least one piece of initial real scene information at the current position and/or current angle.
It should be noted that when the adjusted third camera acquires multiple pieces of initial real scene information, the one with the best quality (for example, the highest definition) at each current position and/or current angle is selected as the final initial real scene information in the embodiment of the present application, which further improves the accuracy and reliability of calibrating the positions and/or angles of the second cameras.
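As an illustration only, the sketch below selects the best-quality camera preview at one acquisition position/angle; the variance of a simple Laplacian is used here as a stand-in for "highest definition" and is only one possible quality measure.

```python
import numpy as np

def sharpest_preview(previews):
    """Pick the best initial real scene information (camera preview) from a
    list of images captured at the same position/angle."""
    def sharpness(img):
        gray = img.mean(axis=2) if img.ndim == 3 else img.astype(float)
        # 4-neighbour Laplacian; higher variance roughly means a sharper image.
        lap = (-4 * gray[1:-1, 1:-1]
               + gray[:-2, 1:-1] + gray[2:, 1:-1]
               + gray[1:-1, :-2] + gray[1:-1, 2:])
        return lap.var()
    return max(previews, key=sharpness)
```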
And S303, calibrating the position and/or angle of each second camera in the virtual reality scene according to the information of each initial real scene.
After the initial real scene information acquired by the third camera at each current position and/or current angle is acquired, the position and/or angle of the corresponding second camera in the virtual reality scene can be calibrated according to each initial real scene information.
As an optional implementation manner, when calibrating the position and/or angle of the corresponding second camera in the virtual reality scene according to each piece of initial real scene information, the second camera corresponding to each piece of initial real scene information can be determined first, and then the position and/or angle of the corresponding second camera can be continuously adjusted based on the content of each piece of initial real scene information, so as to achieve the purpose of calibrating each second camera in the virtual reality scene.
When determining the second camera corresponding to each piece of initial real scene information, the second camera closest to the current position and/or current angle of the third camera can be determined from all the second cameras, according to the current position and/or current angle of the third camera corresponding to each piece of initial real scene information.
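As an illustration only, a minimal sketch of determining the second camera closest to the third camera's current position and/or angle; the pose representation and the weighting between position and angle distances are assumptions made for the example.

```python
import numpy as np

def nearest_second_camera(third_cam_pose, second_camera_poses):
    """Find the second (virtual) camera whose position and angle are closest
    to the third camera's current pose for one piece of initial real scene info."""
    def distance(pose):
        pos_d = np.linalg.norm(np.subtract(pose["position"], third_cam_pose["position"]))
        ang_d = np.linalg.norm(np.subtract(pose["angle"], third_cam_pose["angle"]))
        return pos_d + 0.1 * ang_d   # assumed weighting between metres and degrees
    return min(second_camera_poses,
               key=lambda cam_id: distance(second_camera_poses[cam_id]))
```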
In addition to calibrating the position and/or angle of each second camera in the virtual reality scene according to the initial real scene information, the screen position and/or angle of the corresponding hemisphere or whole sphere of the display in the virtual reality device can also be calibrated, so that the adjusted screen displays information at a proper position and size.
After the position and/or angle of each second camera in the built virtual reality scene is calibrated, the method can configure corresponding virtual reality scene information for each second camera in the virtual reality scene. The virtual reality scene information may be a virtual reality scene picture or a virtual reality scene video, which is not specifically limited herein.
Further, after the virtual reality scene information is configured, the virtual reality scene can be packaged and installed in the virtual reality equipment, so that the virtual reality equipment presents corresponding virtual reality scene information based on the virtual reality scene.
According to the embodiment of the application, real scene data are acquired, an equal-proportion virtual reality scene is built according to the real scene data, and the second cameras in the virtual reality scene are calibrated, thereby providing conditions for combining the virtual reality scene with the real scene.
The following describes a fusion device of virtual reality and real scene according to an embodiment of the present application with reference to fig. 4. Fig. 4 is a schematic block diagram of a fusion device of virtual reality and real scene provided by an embodiment of the application.
The fusion device 400 of the virtual reality and the real scene includes: an information acquisition module 410, an information fusion module 420, and an information display module 430.
The information acquisition module 410 is configured to acquire real scene information acquired by the plurality of first cameras;
the information fusion module 420 is configured to generate fusion scene information according to virtual reality scene information and the real scene information;
and the information display module 430 is configured to display the fused scene information.
In an optional implementation manner of the embodiment of the present application, the information obtaining module 410 is specifically configured to:
acquiring the current position and/or the current angle of each second camera in a virtual reality scene, wherein the virtual reality scene is built based on a real scene, and the initial position and/or the initial angle of the second cameras in the virtual reality scene are the same as the initial position and/or the initial angle of the first cameras in the real scene;
According to the current position and/or the current angle of each second camera, the position and/or the angle of a first camera corresponding to each second camera in the real scene are adjusted;
and acquiring real scene information acquired by each first camera according to the adjusted position and/or angle.
In an optional implementation manner of the embodiment of the present application, if the real scene information and the virtual reality scene information are videos, the information fusion module 420 is specifically configured to:
and fusing the real scene video and the virtual reality scene video according to a preset virtual reality space model to generate fused scene information.
In an optional implementation manner of the embodiment of the present application, if the real scene information and the virtual reality scene information are pictures, the information fusion module 420 is specifically configured to:
and fusing the real scene picture and the virtual reality scene picture according to a preset virtual reality space model to generate fused scene information.
An optional implementation manner of the embodiment of the present application, the information display module 430 is specifically configured to:
determining a target area from the fusion scene information;
and displaying the target area.
An optional implementation manner of the embodiment of the present application, the information display module 430 is further configured to:
and cutting the fusion scene information according to a preset area to obtain the target area.
An optional implementation manner of the embodiment of the present application, the apparatus 400 further includes: the instruction receiving module and the switching module;
the instruction receiving module is used for receiving a visual angle switching instruction;
and the switching module is used for switching the view angle of the fusion scene information according to the view angle switching instruction.
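As an illustration only, the sketch below mirrors the module structure of the apparatus 400 in plain code; the function-valued module slots are a simplification of the information acquisition, fusion and display modules and are not the claimed apparatus itself.

```python
class VirtualRealFusionDevice:
    """Sketch of the apparatus: three modules wired together as in Fig. 4."""

    def __init__(self, acquire_fn, fuse_fn, display_fn):
        self.acquire = acquire_fn   # information acquisition module
        self.fuse = fuse_fn         # information fusion module
        self.display = display_fn   # information display module

    def run_once(self, virtual_scene_info):
        real_scene_info = self.acquire()                        # from the first cameras
        fused = self.fuse(virtual_scene_info, real_scene_info)  # fusion scene information
        self.display(fused)                                     # render and display
```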
According to the technical scheme provided by the embodiment of the application, the real scene information acquired by the plurality of first cameras is acquired, the fusion scene information is generated according to the virtual reality scene information and the real scene information, and the fusion scene information is displayed. Therefore, the seamless splicing effect of the virtual and the reality is realized by combining the real scene in the virtual reality scene, so that more real virtual feeling is brought to the user, and conditions are provided for improving the use experience of the user.
It should be understood that apparatus embodiments and method embodiments may correspond with each other and that similar descriptions may refer to the method embodiments. To avoid repetition, no further description is provided here. Specifically, the apparatus 400 shown in fig. 4 may perform the method embodiment corresponding to fig. 1, and the foregoing and other operations and/or functions of each module in the apparatus 400 are respectively for implementing the corresponding flow in each method in fig. 1, and are not further described herein for brevity.
The apparatus 400 of the embodiment of the present application is described above in terms of functional modules with reference to the accompanying drawings. It should be understood that the functional module may be implemented in hardware, or may be implemented by instructions in software, or may be implemented by a combination of hardware and software modules. Specifically, each step of the method embodiment in the embodiment of the present application may be implemented by an integrated logic circuit of hardware in a processor and/or an instruction in a software form, and the steps of the method disclosed in connection with the embodiment of the present application may be directly implemented as a hardware decoding processor or implemented by a combination of hardware and software modules in the decoding processor. Alternatively, the software modules may be located in a well-established storage medium in the art such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, and the like. The storage medium is located in a memory, and the processor reads information in the memory, and in combination with hardware, performs the steps in the above method embodiments.
Fig. 5 is a schematic block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 5, the electronic device 500 may include: a processor 510, a memory 520, and a display 530;
The memory 520 is connected to the processor 510, and is used for storing a computer program;
the display 530 is connected to the processor 510, and is configured to display a fused scene;
the processor 510 is configured to invoke and run a computer program stored in the memory 520 to execute the fusion method of virtual reality and real scene according to the embodiment of the first aspect.
In other words, the processor 510 may call and run a computer program from the memory 520 to implement the method of the embodiment of the present application.
For example, the processor 510 may be configured to perform the above-described method embodiments according to instructions in the computer program.
In some embodiments of the application, the processor 510 may include, but is not limited to:
a general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like.
In some embodiments of the application, the memory 520 includes, but is not limited to:
volatile memory and/or nonvolatile memory. The nonvolatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable EPROM (EEPROM), or a flash Memory. The volatile memory may be random access memory (Random Access Memory, RAM) which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (Double Data Rate SDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), and Direct memory bus RAM (DR RAM).
In some embodiments of the application, the computer program may be partitioned into one or more modules that are stored in the memory 520 and executed by the processor 510 to perform the methods provided by the application. The one or more modules may be a series of computer program instruction segments capable of performing the specified functions, which are used to describe the execution of the computer program in the electronic device.
As shown in fig. 5, the electronic device 500 may further include:
a transceiver 540, the transceiver 540 being connectable to the processor 510 or the memory 520.
The processor 510 may control the transceiver 540 to communicate with other devices, and in particular, may send information or data to other devices or receive information or data sent by other devices. Transceiver 540 may include a transmitter and a receiver. Transceiver 540 may further include antennas, the number of which may be one or more.
It will be appreciated that the various components in the electronic device are connected by a bus system that includes, in addition to a data bus, a power bus, a control bus, and a status signal bus.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments. Alternatively, embodiments of the present application also provide a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of the method embodiments described above.
When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, functional modules in various embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. The fusion method of the virtual reality and the real scene is characterized by comprising the following steps of:
acquiring real scene information acquired by a plurality of first cameras;
Generating fusion scene information according to the virtual reality scene information and the real scene information;
and displaying the fusion scene information.
2. The method of claim 1, wherein obtaining real scene information acquired by the plurality of first cameras comprises:
acquiring the current position and/or the current angle of each second camera in a virtual reality scene, wherein the virtual reality scene is built based on a real scene, and the initial position and/or the initial angle of the second cameras in the virtual reality scene are the same as the initial position and/or the initial angle of the first cameras in the real scene;
according to the current position and/or the current angle of each second camera, the position and/or the angle of a first camera corresponding to each second camera in the real scene are adjusted;
and acquiring real scene information acquired by each first camera according to the adjusted position and/or angle.
3. The method of claim 1, wherein if the real scene information and the virtual reality scene information are video, generating fusion scene information from the virtual reality scene information and the real scene information comprises:
And fusing the real scene video and the virtual reality scene video according to a preset virtual reality space model to generate fused scene information.
4. The method of claim 1, wherein if the real scene information and the virtual reality scene information are pictures, generating fusion scene information from the virtual reality scene information and the real scene information comprises:
and fusing the real scene picture and the virtual reality scene picture according to a preset virtual reality space model to generate fused scene information.
5. The method of claim 1, wherein displaying the fused scene information comprises:
determining a target area from the fusion scene information;
and displaying the target area.
6. The method of claim 5, wherein determining a target region from the fused scene information comprises:
and cutting the fusion scene information according to a preset area to obtain the target area.
7. The method of any one of claims 1-6, further comprising:
receiving a visual angle switching instruction;
and switching the view angle of the fusion scene information according to the view angle switching instruction.
8. A fusion device of virtual reality and real scene, comprising:
the information acquisition module is used for acquiring real scene information acquired by the plurality of first cameras;
the information fusion module is used for generating fusion scene information according to the virtual reality scene information and the real scene information;
and the information display module is used for displaying the fusion scene information.
9. An electronic device, comprising:
a processor and a memory for storing a computer program, the processor for invoking and running the computer program stored in the memory to perform the method of fusion of virtual reality and real scenes of any of claims 1-7.
10. A computer-readable storage medium storing a computer program for causing a computer to execute the fusion method of virtual reality and real scene according to any one of claims 1 to 7.
CN202210541980.0A 2022-05-17 2022-05-17 Fusion method, device, equipment and medium of virtual reality and real scene Pending CN117115395A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210541980.0A CN117115395A (en) 2022-05-17 2022-05-17 Fusion method, device, equipment and medium of virtual reality and real scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210541980.0A CN117115395A (en) 2022-05-17 2022-05-17 Fusion method, device, equipment and medium of virtual reality and real scene

Publications (1)

Publication Number Publication Date
CN117115395A true CN117115395A (en) 2023-11-24

Family

ID=88797147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210541980.0A Pending CN117115395A (en) 2022-05-17 2022-05-17 Fusion method, device, equipment and medium of virtual reality and real scene

Country Status (1)

Country Link
CN (1) CN117115395A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118312634A (en) * 2024-06-07 2024-07-09 北京升哲科技有限公司 Virtual reality image and data asset management method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination