CN116506563A - Virtual scene rendering method and device, electronic equipment and storage medium

Virtual scene rendering method and device, electronic equipment and storage medium

Info

Publication number
CN116506563A
Authority
CN
China
Prior art keywords
information
camera
rendering
cameras
target
Prior art date
Legal status
Pending
Application number
CN202211070122.9A
Other languages
Chinese (zh)
Inventor
李锐
李想
吴卓莹
Current Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Tencent Network Information Technology Co Ltd filed Critical Shenzhen Tencent Network Information Technology Co Ltd
Priority to CN202211070122.9A priority Critical patent/CN116506563A/en
Publication of CN116506563A publication Critical patent/CN116506563A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures

Abstract

The embodiment of the application discloses a virtual scene rendering method, a virtual scene rendering device, electronic equipment and a storage medium. The method comprises the following steps: acquiring position information of a plurality of cameras; determining inner view cone information of each camera based on the position information of each camera, wherein the inner view cone information characterizes the picture content of the virtual scene viewed from the view angle of the camera during virtual shooting, and determining outer view cone information corresponding to the virtual scene; if it is detected that the display areas of the inner view cone information of at least two target cameras on the display screen overlap, fusing the outer view cone information with the inner view cone information of each target camera respectively to obtain fusion information corresponding to each target camera; and rendering the fusion information corresponding to each target camera respectively, and displaying the rendered picture content on the display screen in sequence. According to the embodiment of the application, the rendering accuracy of the virtual shooting scene can be improved.

Description

Virtual scene rendering method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of virtual reality, in particular to a virtual scene rendering method, device, electronic equipment and storage medium based on multiple cameras in virtual shooting.
Background
In conventional movie production, content involving visual effects is usually shot against a green screen on site, and the green-screen footage is then processed in post-production. This creates a certain disconnect between on-set shooting and post-production: actors can only perform against the green screen during the shoot, which makes the performance difficult, and the creative team cannot view the finished visual-effect result in real time while shooting on site. As a result, the on-set shoot cannot be fully controlled, and the efficiency of movie production is reduced to a certain extent.
In order to solve the above problems, real-time interactive preview technology has appeared in the prior art. Based on a green-screen shooting environment, it uses a camera system and a real-time rendering engine to generate a virtual picture matched to the motion of the real camera, composites the virtual picture into the camera's real footage through real-time matting, and provides the composited preview picture to the creative team as an on-site reference. Real-time interactive preview effectively solves the problem that creators cannot intuitively perceive the fusion of the virtual scene with the real world, but it causes rendering errors when the shooting ranges of multiple cameras overlap.
Disclosure of Invention
In order to solve the technical problems, embodiments of the present application provide a virtual scene rendering method, device, electronic device and storage medium based on multiple cameras in virtual shooting, which can improve the rendering accuracy of virtual shooting scenes.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned in part by the practice of the application.
According to an aspect of the embodiments of the present application, there is provided a virtual scene rendering method based on multiple cameras in virtual shooting, including: acquiring position information of a plurality of cameras; determining inner view cone information of each camera based on the position information of each camera, wherein the inner view cone information characterizes the picture content of the virtual scene viewed from the view angle of the camera during virtual shooting, and determining outer view cone information corresponding to the virtual scene; if it is detected that the display areas of the inner view cone information of at least two target cameras on the display screen overlap, fusing the outer view cone information with the inner view cone information of each target camera respectively to obtain fusion information corresponding to each target camera; and rendering the fusion information corresponding to each target camera respectively, and displaying the rendered picture content on the display screen in sequence.
According to an aspect of the embodiments of the present application, there is provided a virtual shooting system, including a genlock signal transmitting device, a rendering device connected to the genlock signal transmitting device, and a plurality of cameras, wherein: the synchronous phase-locked signal transmitting device is configured to transmit a first synchronous phase-locked signal to each camera and a second synchronous phase-locked signal to the rendering device, the frequency of the second synchronous phase-locked signal being N times that of the first synchronous phase-locked signal, where N is the number of cameras; the plurality of cameras release their shutters at the same time in response to the first synchronous phase-locked signal to capture images; and the rendering device is configured to perform picture rendering at the frequency of the second synchronous phase-locked signal in the process of executing the virtual scene rendering method based on multiple cameras in virtual shooting.
According to an aspect of the embodiments of the present application, there is provided a virtual scene rendering apparatus based on multiple cameras in virtual shooting, including: the acquisition module is used for acquiring the position information of each camera; the determining module is used for determining inner view cone information of each camera based on the position information of each camera, wherein the inner view cone information characterizes the picture content of a virtual scene watched from the visual angle of the camera in the virtual shooting process, and determines outer view cone information corresponding to the virtual scene; the fusion module is used for respectively fusing the outer view cone information with the inner view cone information of each target camera to obtain fusion information corresponding to each target camera if the display areas of the inner view cone information of at least two target cameras on the display screen are detected to be overlapped; and the rendering display module is used for respectively rendering the fusion information corresponding to each target camera and sequentially displaying the picture content obtained by rendering on a display screen.
In another exemplary embodiment, the virtual scene rendering device based on multiple cameras in virtual shooting further includes a second fusion module and a second rendering and displaying module, where the second fusion module is configured to fuse the outer cone information with the inner cone information of the non-target camera to obtain remaining fusion information, and the non-target camera is a camera other than the target camera among the multiple cameras; the second rendering and displaying module is used for rendering the rest of the fusion information and displaying the picture content obtained by rendering on a display screen.
In another exemplary embodiment, the virtual scene rendering device based on multiple cameras in virtual shooting further includes a third fusion module and a third rendering and displaying module, where the third fusion module is configured to fuse the outer cone information with the inner cone information of multiple cameras to obtain summarized fusion information if no overlapping of the display areas of the inner cone information of any two cameras on the display screen is detected; the third rendering and displaying module is used for rendering the summarized fusion information and displaying the picture content obtained by rendering on a display screen.
In another exemplary embodiment, the fusion module includes a deletion unit and a fusion unit, where the deletion unit is configured to delete, if it is detected that display areas of the inner cone information of at least two target cameras on the display screen overlap, image information of the display areas of the inner cone information of each target camera on the display screen, and obtain outer cone information after deletion; and the fusion unit is used for fusing the deleted outer view cone information with the inner view cone information of the corresponding target camera to obtain fusion information corresponding to the target camera.
According to an aspect of the embodiments of the present application, there is provided an electronic device including a processor and a memory, the memory having stored thereon computer readable instructions which, when executed by the processor, implement a method as above.
According to one aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform a method as previously provided.
According to an aspect of embodiments of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the various alternative embodiments described above.
In the technical solution provided by the embodiments of the present application, the inner view cone information of each camera is determined based on the position information of each camera, and the outer view cone information corresponding to the virtual scene is determined. If it is detected that the display areas of the inner view cone information of at least two target cameras on the display screen overlap, the outer view cone information is fused with the inner view cone information of each target camera respectively to obtain the fusion information corresponding to each target camera; the fusion information corresponding to each target camera is then rendered respectively, and the rendered picture content is displayed on the display screen in sequence. In this way, the inner view cone information of target cameras whose display areas overlap is not rendered and displayed on the display screen at the same time, which avoids display errors and improves the rendering accuracy of the virtual shooting scene.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art. In the drawings:
FIG. 1 is a schematic diagram of a prior art virtual scene rendering presentation shown in an exemplary embodiment;
FIG. 2 is a schematic diagram of a prior art virtual scene rendering presentation shown in another exemplary embodiment;
fig. 3 is a schematic view of pictures taken by two cameras in a virtual shooting scene;
FIG. 4 is a schematic diagram of a virtual scene rendering presentation image shown in an exemplary embodiment of the present application;
FIG. 5 is a flow chart illustrating a method of multi-camera based virtual scene rendering in virtual photography according to an exemplary embodiment of the present application;
FIG. 6 is a flowchart of another virtual scene rendering method based on multiple cameras in virtual shooting, proposed on the basis of the embodiment shown in FIG. 5;
FIG. 7 is a flowchart of another virtual scene rendering method based on multiple cameras in virtual shooting, proposed on the basis of the embodiment shown in FIG. 6;
FIG. 8 is a flowchart of an exemplary embodiment of step S103 in the embodiment shown in FIG. 5;
fig. 9 is a block diagram of a virtual photographing system shown in an exemplary embodiment;
fig. 10 is a block diagram of a virtual photographing system shown in another exemplary embodiment of the present application;
FIG. 11 is a flow chart illustrating a method of multi-camera based virtual scene rendering in virtual photography according to another exemplary embodiment of the present application;
FIG. 12 is a timing diagram of a genlock signal shown in an exemplary embodiment;
FIG. 13 is a timing diagram of a genlock signal after doubling processing shown in an exemplary embodiment;
fig. 14 is a block diagram of a multi-camera based virtual scene rendering apparatus in virtual photography according to an exemplary embodiment of the present application;
fig. 15 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
It should also be noted that reference to "a plurality" in this application means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
It can be appreciated that the specific embodiments of the present application involve data such as source video, virtual scenes, inner view cone information, and outer view cone information. When the embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use and processing of the relevant data need to comply with the relevant laws, regulations and standards of the relevant countries and regions.
First, the following terms are explained:
virtual film making: virtual production is a broad term referring to a series of computer-aided production and visual movie production methods, also referred to herein as virtual photography. According to the definition of the virtual production by the research and development team of related technology, the virtual production is a real and digital world blended area, and combines virtual reality and augmented reality with CGI (Computer-Generated Imagery) and game engine technologies, so that producers can see scenes to develop in front of them as if they were being synthesized and photographed in real scenes.
LED screen: a large LED (Light-Emitting Diode) screen is used to display virtual contents in a virtual production shooting site.
On-site shooting camera: the scene shooting camera in the virtual film production can capture the fusion picture of the LED screen and the screen foreground scene at the same time.
genlock: a synchronous phase lock, a system which can realize the synchronization of a plurality of systems and the same synchronous source. The method is used for video playing and shooting. By this technique, a plurality of cameras can be controlled to release shutters at the same timing to photograph the same object. It is also possible to have the display play the picture at the same time and when a genlock signal source is provided, all devices connected to this signal will lock the signal at the same refresh rate.
Outer viewing cone: the specific picture displayed by the LED screen during virtual shooting is usually a picture rendered at a specified position in the engine.
Inner view cone: and updating the picture content of the LED screen according to the position of the field camera in the virtual shooting process, wherein the picture mainly displays the picture content of the virtual scene watched by the angle of the real camera. Only the range seen by the live camera is typically updated, and the non-seen content uses the outer cone of view.
In a virtual shooting scene, the image recorded by a camera is obtained by fusing the background picture displayed on the LED screen with the picture of the real persons or objects on the shooting site. The cameras may record video from different positions and angles, and the background pictures captured from different angles or positions should be different. Therefore, in the process of rendering and displaying the virtual scene on the LED screen, the background picture displayed on the LED screen needs to be updated according to the position information of the camera, so as to ensure the accuracy of the background picture captured by the camera. To improve shooting efficiency, a plurality of cameras are usually set up to record video, and the virtual scene rendered and displayed on the LED screen correspondingly needs to be updated according to the positions of the plurality of cameras.
The prior art updates and renders virtual scenes displayed on an LED screen by adopting the following modes:
determining the inner view cone information of the corresponding camera from the virtual scene according to the position information of each camera;
calculating the outer cone information of the virtual scene;
fusing all the inner view cone information and the outer view cone information;
rendering and displaying the obtained fusion information on an LED screen.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating a prior art virtual scene rendering presentation in an exemplary embodiment. As shown in fig. 1, there are two inner view cones, namely inner cone 1 and inner cone 2, which correspond to the inner view cone information of the two cameras respectively, and four outer view cones, namely outer cone 1 corresponding to the first display screen, outer cone 2 corresponding to the second display screen, outer cone 3 corresponding to the third display screen, and outer cone 4 corresponding to the fourth display screen; because the positions of the first to fourth display screens differ, the corresponding outer view cone information also differs. In fig. 1, part of inner cone 1 is fused with outer cone 1 and displayed on the first display screen, and the other part is fused with outer cone 2 and displayed on the second display screen; part of inner cone 2 is fused with outer cone 3 and displayed on the third display screen, and the other part is fused with outer cone 4 and displayed on the fourth display screen. That is, the inner cones corresponding to the two cameras are displayed on the display screens at the same time.
The inventors of the present application have found through long-term research that if the display areas of the inner view cone information of at least two cameras on the LED display screen overlap, then when fusion information containing the overlapping inner view cone information is displayed on the LED screen, only the picture at the frontmost level can be displayed preferentially. That is, if at least two cameras overlap, only one camera's picture is guaranteed to be displayed correctly, while the other camera captures a wrong picture because of the overlap.
Referring to fig. 2, fig. 2 is a schematic diagram illustrating a prior art virtual scene rendering presentation in another exemplary embodiment. As shown in fig. 2, the display areas of inner cone 3 and inner cone 4 on the display screen overlap. In the prior art, a display error may occur in the overlapping area: for example, if a priority display level is not preset for each camera, the overlapping portion of the display areas will simultaneously display the inner view cone information corresponding to each camera; and if a priority display level is preset for each camera, only the inner view cone information of the camera with the highest priority is displayed in the portion where the display areas overlap.
Referring to fig. 3, fig. 3 is a schematic diagram of pictures shot by two cameras in a virtual shooting scene. As shown in fig. 3, in the prior art only the picture at the frontmost level can be displayed preferentially in an overlapping portion. When the shooting ranges of the two cameras overlap, that is, when the situation in the embodiment shown in fig. 2 occurs, only one camera's picture can be guaranteed to be correct, while the other camera captures a wrong picture because of the overlap. For example, if the display level of the camera corresponding to picture 2 is set to the front and the display level of the camera corresponding to picture 1 is set behind it, picture 1 in fig. 3 is displayed normally except that the region of picture 1 overlapping picture 2, i.e., the wrong-picture region, shows an incorrect picture; specifically, the picture shown in that region is the image data at the same position in picture 2.
In order to solve the above technical problems, the embodiments of the present application achieve multi-angle shooting by switching between different display pictures in the overlapping portion of the pictures, so that different cameras can each capture a correct picture in the overlapping portion.
Referring to fig. 4, fig. 4 is a schematic diagram of a virtual scene rendering presentation image according to an exemplary embodiment of the present application. As shown in fig. 4, the display areas of inner cone 4 and inner cone 5 overlap on the display screen, and this embodiment solves the display error caused by the overlapping display areas by interleaving the overlapping portion in time.
Embodiments of the present application specifically provide a virtual scene rendering method based on multiple cameras in virtual shooting, a virtual shooting system, a virtual scene rendering device based on multiple cameras in virtual shooting, an electronic device, and a computer-readable storage medium, and these embodiments are described in detail below.
Referring to fig. 5, fig. 5 is a flowchart of a virtual scene rendering method based on multiple cameras in virtual shooting according to an exemplary embodiment of the present application, and as shown in fig. 5, the virtual scene rendering method based on multiple cameras in virtual shooting provided in the present embodiment includes steps S101 to S104, and the detailed description refers to the following:
step S101: positional information of a plurality of cameras is acquired.
In the prior art, the inner view cone information of a plurality of cameras at the same moment is rendered and displayed on the LED screen simultaneously, while inner view cone information from different moments is not rendered and displayed together. Therefore, this embodiment acquires the position information of the plurality of cameras at the same moment, so that the inner view cone information of each camera can be determined based on the position information of the plurality of cameras at that moment.
Exemplarily, the position information of a camera includes the position coordinates, Euler angles, and field angle of the camera. The Euler angles of the camera are three independent angle parameters used to determine the orientation of a rigid body rotating about a fixed point, consisting of the nutation angle, the precession angle, and the rotation (spin) angle; the three angles represent the rotation of the camera about the three axes of the coordinate system respectively. In an optical instrument, the angle formed, with the lens of the instrument as the vertex, by the two edges of the maximum range through which the image of the measured object can pass is called the field angle. The size of the field angle determines the field of view of the optical instrument: the larger the field angle, the larger the field of view and the smaller the optical magnification. Colloquially, a target object outside this angle will not be captured by the lens.
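As a minimal, non-limiting illustration of the position information described above, the following Python sketch groups the position coordinates, Euler angles, and field angle into one record per camera; the class name, field layout, and example values are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    """Illustrative container for one camera's position information."""
    position: tuple[float, float, float]       # position coordinates in the scene
    euler_angles: tuple[float, float, float]   # nutation, precession, spin (degrees)
    fov_degrees: float                         # field angle of the lens

# Example: poses sampled for two on-site cameras at the same moment
poses = {
    "camera_1": CameraPose((1.0, 1.5, -3.0), (0.0, 15.0, 0.0), 40.0),
    "camera_2": CameraPose((-2.0, 1.5, -3.5), (0.0, -10.0, 0.0), 35.0),
}
```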
Step S102: Inner view cone information of each camera is determined based on the position information of each camera, and outer view cone information corresponding to the virtual scene is determined.
Since the inner view cone information characterizes the picture content of the virtual scene viewed from the camera's angle of view during virtual shooting, this embodiment determines the inner view cone information of each camera based on the position information of each camera. As exemplified above, the position information of a camera includes the position coordinates, Euler angles, and field angle of the camera.
The inner view cone information of each camera may be determined in the following way:
First, the three-dimensional scene is projected onto a virtual screen.
In this embodiment, the virtual screen is a virtual display screen in the rendering device that carries the perspective projection of the three-dimensional scene. After the three-dimensional scene is perspective-projected, the picture presented on the virtual screen is the outer view cone information.
The three-dimensional scene is then perspective-projected based on the position information of the camera, and the picture presented on the virtual screen is the inner view cone information. In this embodiment, perspective projection transformation is performed on the virtual scene based on the position coordinates, Euler angles, and field angle of each camera to obtain the inner view cone information of each camera.
Specifically, the position coordinates of a camera are first projected onto the virtual screen to obtain projection coordinates; a first sub-scene is determined in the three-dimensional scene based on the projection coordinates and the Euler angles of the camera; a second sub-scene is then determined in the first sub-scene based on the field angle of the camera, and the image information corresponding to the second sub-scene is taken as the inner view cone information of the corresponding camera.
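The projection described above can be sketched as follows. This is a deliberately simplified sketch that assumes a single planar LED wall facing the cameras and reuses the CameraPose record from the earlier sketch; a real rendering engine would use full view and projection matrices and the actual LED wall geometry.

```python
import math

def inner_frustum_region(pose, screen_z=0.0, screen_width=10.0, screen_height=4.0):
    """Project the camera position onto a planar virtual screen at z = screen_z and
    derive an approximate rectangle (the camera's inner view cone display area)."""
    cx, cy, cz = pose.position
    distance = abs(screen_z - cz)                       # camera-to-screen distance
    half_extent = distance * math.tan(math.radians(pose.fov_degrees / 2.0))
    yaw = math.radians(pose.euler_angles[1])            # use yaw only, for illustration
    centre_x = cx + distance * math.tan(yaw)            # projected centre on the wall
    left = max(-screen_width / 2.0, centre_x - half_extent)
    right = min(screen_width / 2.0, centre_x + half_extent)
    bottom = max(-screen_height / 2.0, cy - half_extent)
    top = min(screen_height / 2.0, cy + half_extent)
    return (left, bottom, right, top)                   # display area on the LED wall
```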
Step S103: If it is detected that the display areas of the inner view cone information of at least two target cameras on the display screen overlap, the outer view cone information is fused with the inner view cone information of each target camera respectively to obtain the fusion information corresponding to each target camera.
In this embodiment, the display area of a camera's inner view cone information on the display screen corresponds to the position information of that camera; that is, the position information of the camera determines the display area of its inner view cone information on the display screen. For example, the display area of the inner view cone information on the display screen is determined according to the position coordinates, Euler angles, and field angle of the camera. Illustratively, the three-dimensional scene is perspective-projected based on the position information of the camera, and the picture position presented on the virtual screen is the display position of the corresponding camera's inner view cone information on the display screen.
For example, whether the display areas of the inner view cone information of the respective cameras on the display screen overlap is detected pairwise. If there are three cameras, overlap is detected between the display areas corresponding to each pair of cameras' inner view cone information, for a total of 3 overlap detections.
In order to improve the fault tolerance of this detection, for example, it is determined that the display areas of the inner view cone information of at least two target cameras on the display screen overlap only if the detected overlapping area is larger than a preset area.
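A minimal sketch of the pairwise overlap test with a preset-area threshold might look as follows; the rectangle representation of a display area and the threshold value are assumptions made only for illustration.

```python
from itertools import combinations

def rect_overlap_area(a, b):
    """Overlap area of two (left, bottom, right, top) display regions."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0.0

def find_target_cameras(regions, min_overlap_area=0.01):
    """Pairwise check of each camera's inner view cone display region; cameras whose
    regions overlap by more than the preset area are treated as target cameras."""
    targets = set()
    for (name_a, ra), (name_b, rb) in combinations(regions.items(), 2):
        if rect_overlap_area(ra, rb) > min_overlap_area:
            targets.update((name_a, name_b))
    return targets
```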
In this embodiment, if it is detected that the display areas of the inner view cone information of at least two target cameras on the display screen overlap, then rendering and displaying the inner view cone information of all of these target cameras on the display screen at the same time would cause a display error. For example, if a priority display level is not preset for each camera, the overlapping portion of the display areas will simultaneously display the inner view cone information corresponding to each target camera; and if a priority display level is preset for each camera, only the inner view cone information of the camera with the highest priority is displayed in the portion where the display areas overlap.
In order to avoid such display errors, in this embodiment the outer view cone information is fused with the inner view cone information of each target camera respectively, so as to obtain the fusion information corresponding to each target camera. This embodiment does not limit the manner in which the outer view cone information is fused with the inner view cone information of each target camera.
Exemplarily, the inner view cone information and the outer view cone information are fused according to the display area of the corresponding target camera's inner view cone information on the display screen; for example, the image information of that display area contained in the outer view cone information is replaced with the inner view cone information to obtain the corresponding fusion information.
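The replacement-based fusion described above can be sketched, under the assumption that both the outer view cone picture and a camera's inner view cone picture are available as full-screen pixel arrays and that the display area is given in wall-pixel coordinates:

```python
import numpy as np

def fuse(outer_frame: np.ndarray, inner_frame: np.ndarray, region) -> np.ndarray:
    """Replace the outer view cone pixels inside the camera's display region with
    that camera's inner view cone pixels; all other pixels keep the outer content."""
    y0, y1, x0, x1 = region                  # display area in wall-pixel coordinates
    fused = outer_frame.copy()
    fused[y0:y1, x0:x1] = inner_frame[y0:y1, x0:x1]
    return fused
```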
Step S104: The fusion information corresponding to each target camera is rendered respectively, and the rendered picture contents are displayed on the display screen in sequence.
In this embodiment, each picture content obtained by rendering the fusion information corresponding to a target camera contains only that target camera's inner view cone information, and the picture contents are not displayed on the display screen at the same time, so no display error is caused.
It should be noted that, the present embodiment does not limit the rendering sequence of rendering the fusion information corresponding to each target camera and the display sequence of sequentially displaying the rendered picture content on the display screen.
In one exemplary approach, the fusion information corresponding to each target camera is first rendered to obtain the picture content of each target camera, and the picture contents of the target cameras are then displayed on the display screen in sequence.
In another exemplary approach, the fusion information corresponding to one target camera is rendered and the resulting picture content is displayed on the display screen, and these two steps are then repeated for the fusion information corresponding to each remaining target camera until the fusion information of all target cameras has been processed. For example, if step S103 obtains the fusion information corresponding to two target cameras, the fusion information of one target camera is rendered first, and after its rendered picture content is displayed on the display screen, the fusion information of the other target camera is rendered and its rendered picture content is displayed on the display screen.
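The second ordering can be sketched as a simple loop; the `renderer.render` and `display.show` calls stand in for whatever rendering engine and LED processor interfaces are actually used and are purely hypothetical names, and `fuse` is the fusion sketch given earlier.

```python
def render_and_display_sequentially(renderer, display, outer_frame,
                                    inner_frames, regions, targets):
    """For each target camera in turn: build its fusion information, render it, and
    show the resulting picture on the LED wall before moving to the next camera."""
    for cam in sorted(targets):
        fused = fuse(outer_frame, inner_frames[cam], regions[cam])
        frame = renderer.render(fused)       # hypothetical render call
        display.show(frame)                  # hypothetical display call
        # A notification could be sent to `cam` here so that it opens its shutter
        # only while its own fused picture is on screen.
```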
As can be seen from the above, in the virtual scene rendering method based on multiple cameras in virtual shooting provided in this embodiment, the inner view cone information of each camera is determined based on the position information of each camera, and the outer view cone information corresponding to the virtual scene is determined; if it is detected that the display areas of the inner view cone information of at least two target cameras on the display screen overlap, the outer view cone information is fused with the inner view cone information of each target camera respectively to obtain the fusion information corresponding to each target camera; the fusion information corresponding to each target camera is then rendered respectively, and the rendered picture contents are displayed on the display screen in sequence.
In another exemplary embodiment, after the picture content rendered from the fusion information corresponding to a target camera is displayed on the display screen, notification information is sent to that target camera, the notification information instructing the target camera to capture an image. In this way, the camera is guaranteed to capture images during the time period in which its corresponding inner view cone information is rendered and displayed on the display screen, thereby avoiding errors in the background picture of the captured image.
In this embodiment, when it is detected that the display areas of the inner view cone information of at least two target cameras on the display screen overlap, the fusion information containing each target camera's inner view cone information is rendered and displayed on the display screen separately; that is, the inner view cone information of the target cameras is not rendered and displayed on the display screen at the same time. Therefore, in order to ensure that each target camera captures images while its corresponding inner view cone picture is being rendered and displayed on the display screen, and thus captures images against the correct background picture, this embodiment sends notification information to the corresponding target camera after the picture content rendered from that target camera's fusion information is displayed on the display screen.
Referring to fig. 6, fig. 6 is a flowchart of another virtual scene rendering method based on multiple cameras in virtual shooting, proposed on the basis of the embodiment shown in fig. 5. As shown in fig. 6, the method provided in this embodiment further includes steps S201 to S202, described in detail as follows:
step S201: and fusing the outer cone information with the inner cone information of the non-target camera to obtain the rest fused information.
In this embodiment, the non-target cameras are the cameras other than the target cameras among the plurality of cameras, so the display areas of the inner view cone information of the non-target cameras do not overlap. In this case, in order to improve rendering efficiency, this embodiment directly fuses the outer view cone information with the inner view cone information of every non-target camera to obtain the remaining fusion information; that is, the remaining fusion information contains the inner view cone information of all non-target cameras.
Exemplarily, the inner view cone information and the outer view cone information are fused according to the display area of each non-target camera's inner view cone information on the display screen; for example, the image information of those display areas contained in the outer view cone information is replaced with the inner view cone information of the corresponding non-target cameras to obtain the remaining fusion information. Because the remaining fusion information no longer contains the outer view cone image information of the non-target cameras' display areas, rendering display errors can be avoided on the one hand, and rendering and computing resources can be reduced on the other.
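Because the non-target cameras' display areas never overlap, their inner view cone pictures can all be composited into one frame, as in the following sketch (same assumed pixel-array representation as above):

```python
def fuse_non_targets(outer_frame, inner_frames, regions, targets):
    """Composite every non-target camera's inner view cone region into a single
    'remaining fusion' frame on top of the outer view cone picture."""
    fused = outer_frame.copy()
    for cam, inner in inner_frames.items():
        if cam in targets:
            continue                          # target cameras are handled separately
        y0, y1, x0, x1 = regions[cam]
        fused[y0:y1, x0:x1] = inner[y0:y1, x0:x1]
    return fused
```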
Step S202: The remaining fusion information is rendered, and the rendered picture content is displayed on the display screen.
In this embodiment, since the remaining fusion information contains the inner view cone information corresponding to all non-target cameras, rendering the remaining fusion information and displaying the rendered picture content on the display screen allows the inner view cone information of all non-target cameras to be displayed on the display screen at the same time.
In this way, when it is detected that the display areas of the inner view cone information of at least two target cameras overlap, the outer view cone information is fused with the inner view cone information of each target camera respectively to obtain the fusion information corresponding to each target camera, the fusion information corresponding to each target camera is rendered respectively, and the rendered picture contents are displayed on the display screen in sequence. Inner view cone information with overlapping display areas is thus rendered and displayed on the display screen one at a time, which avoids the display errors caused by rendering and displaying the inner view cone information of all target cameras on the display screen simultaneously and improves the accuracy of rendering and displaying the virtual scene. In addition, the outer view cone information is fused with the inner view cone information of the non-target cameras to obtain the remaining fusion information, the remaining fusion information is rendered, and the rendered picture content is displayed on the display screen.
It can be understood that in this embodiment the outer view cone information may first be fused with the inner view cone information of the non-target cameras to obtain the remaining fusion information, which is rendered and displayed on the display screen, and the outer view cone information then fused with the inner view cone information of each target camera to obtain the fusion information corresponding to each target camera, which is rendered respectively and displayed on the display screen in sequence; alternatively, the fusion information corresponding to each target camera may be obtained, rendered, and displayed in sequence first, and the remaining fusion information then obtained, rendered, and displayed. The order is not specifically limited here.
Illustratively, after the picture content rendered from the remaining fusion information is displayed on the display screen, notification information is sent to all non-target cameras; in this embodiment the notification information instructs all non-target cameras to capture images. In this way, the non-target cameras capture images while their corresponding inner view cone pictures are rendered and displayed on the display screen, which ensures that the non-target cameras capture images against the correct background picture.
Referring to fig. 7, fig. 7 is a flowchart of another virtual scene rendering method based on multiple cameras in virtual shooting, proposed on the basis of the embodiment shown in fig. 6. As shown in fig. 7, the method provided in this embodiment further includes steps S301 to S302, described in detail as follows:
step S301: if the overlapping of the inner cone information of any two cameras in the display area on the display screen is not detected, the outer cone information and the inner cone information of the plurality of cameras are fused, and summarized and fused information is obtained.
In this embodiment, if no overlap is detected between the display areas of the inner view cone information of any two cameras on the display screen, then rendering and displaying the inner view cone information of all cameras on the display screen at the same time will not cause display errors.
Exemplarily, the inner view cone information and the outer view cone information are fused according to the display area of each camera's inner view cone information on the display screen; for example, the image information of those display areas contained in the outer view cone information is replaced with the inner view cone information of the corresponding cameras to obtain the summarized fusion information. Because the summarized fusion information no longer contains the outer view cone image information of the cameras' display areas, rendering display errors can be avoided on the one hand, and rendering and computing resources can be reduced and rendering efficiency improved on the other.
Step S302: rendering the summarized fusion information, and displaying the rendered picture content on a display screen.
In this embodiment, since the summarized fusion information contains the inner view cone information corresponding to all cameras, rendering the summarized fusion information and displaying the rendered picture content on the display screen allows the inner view cone information of all cameras to be displayed on the display screen at the same time.
In this way, when no overlap is detected between the display areas of the inner view cone information of any two cameras on the display screen, the outer view cone information is fused with the inner view cone information of the plurality of cameras to obtain the summarized fusion information, the summarized fusion information is rendered, and the rendered picture content is displayed on the display screen.
Referring to fig. 8, fig. 8 is a flowchart of an exemplary embodiment of step S103 in the embodiment shown in fig. 5. As shown in fig. 8, step S103 includes steps S401 to S402, described in detail as follows:
Step S401: If it is detected that the display areas of the inner view cone information of at least two target cameras on the display screen overlap, the image information of each target camera's inner view cone display area is deleted from the outer view cone information to obtain the deleted outer view cone information.
Step S402: The deleted outer view cone information is fused with the corresponding target camera's inner view cone information to obtain the fusion information corresponding to that target camera.
Considering that the outer view cone information contains the image information of the target cameras' inner view cone display areas, this embodiment deletes that image information to obtain the deleted outer view cone information, and fuses the deleted outer view cone information with the corresponding target camera's inner view cone information to obtain the fusion information corresponding to that target camera. Because the fusion information no longer contains the outer view cone image information of the target cameras' display areas, rendering display errors can be avoided on the one hand, and rendering and computing resources can be reduced on the other.
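Steps S401 and S402 can be sketched as a two-step variant of the fusion above: first every target camera's region is blanked out of the outer view cone frame, then each camera's own inner view cone pixels are pasted back into its own region. The zero fill value is an illustrative assumption.

```python
def delete_inner_regions(outer_frame, regions, targets):
    """Step S401 (sketch): blank out every target camera's inner view cone display
    region in the outer view cone frame, giving the deleted outer view cone."""
    trimmed = outer_frame.copy()
    for cam in targets:
        y0, y1, x0, x1 = regions[cam]
        trimmed[y0:y1, x0:x1] = 0
    return trimmed

def fuse_with_trimmed(trimmed_outer, inner_frame, region):
    """Step S402 (sketch): paste one target camera's inner view cone pixels back
    into its own region of the deleted outer view cone."""
    y0, y1, x0, x1 = region
    fused = trimmed_outer.copy()
    fused[y0:y1, x0:x1] = inner_frame[y0:y1, x0:x1]
    return fused
```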
Referring to fig. 9, fig. 9 is a block diagram of a virtual shooting system shown in an exemplary embodiment, and as shown in fig. 9, the virtual shooting system provided in this embodiment includes a genlock signal transmitting device, a rendering device connected to the genlock signal transmitting device, and a plurality of cameras, where:
The synchronous phase-locked signal transmitting device is configured to transmit a first synchronous phase-locked signal to each camera and a second synchronous phase-locked signal to the rendering device, and the plurality of cameras release their shutters at the same time in response to the first synchronous phase-locked signal to capture images; the rendering device is configured to perform picture rendering at the frequency of the second synchronous phase-locked signal in the process of executing the virtual scene rendering method based on multiple cameras in virtual shooting provided in any of the above embodiments.
In this embodiment, the frequency of the second synchronous phase-locked signal is N times that of the first synchronous phase-locked signal, where N is the number of cameras. For example, if the number of cameras is 5 and the frequency of the first synchronous phase-locked signal is 25, the frequency of the second synchronous phase-locked signal is 25×5=125. The purpose of this arrangement is to ensure that the inner view cone information of each camera can be rendered by the rendering device at a different time and presented on the display screen.
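The frequency relationship can be illustrated with a small round-robin scheduler; the slot assignment shown here is an assumption about one possible way to map rendering ticks to cameras, not a statement of the disclosed implementation.

```python
def render_slots(camera_names, base_rate_hz=25):
    """The rendering device runs at N times the camera genlock rate; each rendering
    tick is assigned to one camera in round-robin order, so every camera's fused
    picture is shown once per base-rate period."""
    n = len(camera_names)
    render_rate_hz = base_rate_hz * n                 # e.g. 5 cameras at 25 Hz -> 125 Hz
    slot_of_tick = lambda tick: camera_names[tick % n]
    return render_rate_hz, slot_of_tick

rate, slot = render_slots(["cam_1", "cam_2", "cam_3", "cam_4", "cam_5"])
assert rate == 125 and slot(7) == "cam_3"
```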
Illustratively, the rendering device is configured to: obtain the position information of the plurality of cameras; determine the inner view cone information of each camera based on the position information of each camera, where the inner view cone information characterizes the picture content of the virtual scene viewed from the camera's angle of view during virtual shooting, and determine the outer view cone information corresponding to the virtual scene; if it is detected that the display areas of the inner view cone information of at least two target cameras on the display screen overlap, fuse the outer view cone information with the inner view cone information of each target camera respectively to obtain the fusion information corresponding to each target camera; and render the fusion information corresponding to each target camera based on the second synchronous phase-locked signal, and display the rendered picture contents on the display screen in sequence, where the frequency of the second synchronous phase-locked signal is K+1 times that of the first synchronous phase-locked signal and K is the number of target cameras.
In this embodiment, the genlock signal transmitting device and the rendering device may be a smart phone, a tablet computer, a PC (Personal Computer), an intelligent voice interaction device, a smart home appliance, an in-vehicle terminal, or another electronic device, which is not limited here. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms, which is not limited here.
The virtual shooting system provided in this embodiment further includes a position acquisition device corresponding to each camera, where the position acquisition device is configured to acquire position information of the corresponding camera.
Exemplarily, the virtual shooting system further includes a display processor corresponding to the display device, and the display processor is configured to display the rendered picture contents on the display screen in sequence based on the second synchronous phase-locked signal. It can be understood that the display processor may be a processing device separate from the display device, or may be a component of the display device having processing capability, which is not specifically limited here.
Referring to fig. 10, fig. 10 is a block diagram of a virtual shooting system according to another exemplary embodiment of the present application. As shown in fig. 10, the virtual shooting system includes a synchronous phase-locked signal transmitting device, a rendering device, a first camera, a second camera, a display processor, a display screen, a first position information acquiring device, and a second position information acquiring device. The first position information acquiring device is connected to the first camera, the second position information acquiring device is connected to the second camera, the rendering device is connected to the first and second position information acquiring devices, the display processor is connected to the rendering device and the display screen respectively, and the synchronous phase-locked signal transmitting device is connected to the rendering device, the first camera, the second camera, the display processor, the first position information acquiring device, and the second position information acquiring device.
The synchronous phase-locked signal transmitting device sends a genlock signal to the first camera and the second camera, so that the first camera and the second camera capture images in response to the genlock signal. It also sends the genlock signal to the first position information acquiring device and the second position information acquiring device, so that they acquire the position information of the first camera and the second camera in response to the genlock signal. It further sends the genlock signal to the rendering device; the rendering device determines the inner view cone information of the first camera and the second camera based on their position information, determines the outer view cone information corresponding to the virtual scene, and fuses the outer view cone information with the inner view cone information of the first camera and the second camera respectively to obtain the fusion information corresponding to each camera. Finally, the genlock signal is sent to the display processor; the display processor performs frequency-doubling processing on the genlock signal, and the fusion information corresponding to each camera is rendered in turn according to the doubled genlock signal and the rendered pictures are displayed on the display screen in sequence.
In this example, the genlock signal after doubling is 2 times the original genlock signal.
Referring to fig. 11, fig. 11 is a flowchart of a virtual scene rendering method based on multiple cameras in virtual shooting according to another exemplary embodiment of the present application, and as shown in fig. 11, the virtual scene rendering method based on multiple cameras in virtual shooting provided in the present embodiment includes the following steps:
the hardware environment is confirmed.
In this embodiment, the hardware environment of the virtual shooting is confirmed, and the first camera, the second camera, the position acquisition device, the rendering device, and the display processor are confirmed to be connected to the genlock signal transmission device.
The software environment is validated.
In this embodiment, the software environment of the virtual shooting is confirmed; that is, it is confirmed that the first camera, the second camera, the position acquiring devices, the rendering device, and the display processor are all locked to the genlock signal transmitted by the genlock signal transmitting device.
Setting the frame rate of the synchronous phase-locked signal transmitting device.
In the present embodiment, the frame rate of the genlock signal transmitting apparatus indicates a shooting frame rate for each camera, an original rendering frame rate of the rendering apparatus, and the like, which are set in advance.
And setting an engine.
In this embodiment, the genlock signal is doubled. The genlock signal after the doubling process is 2 times the original genlock signal.
Referring to fig. 12, fig. 12 is a timing diagram of a genlock signal according to an exemplary embodiment. As shown in fig. 12, the genlock device sends a 25 fps genlock signal to the engine side (i.e., the rendering device), and the engine side processes it as a 25×2 = 50 fps signal. Referring to fig. 13, fig. 13 is a timing diagram of the genlock signal after the doubling processing. Since the level of the genlock signal is either high or low, by default only one kind of transition is treated as a cycle, for example from high to low; this embodiment treats both the high-to-low and the low-to-high transition as a frame boundary, so that twice the frame rate is obtained.
In this embodiment, if the current frame is an odd frame signal, it is determined that the inner view cone information corresponding to the first camera should be rendered and displayed for the current frame; if the current frame is an even frame signal, the inner view cone information corresponding to the second camera should be rendered and displayed. Specifically, if the current frame is determined to be an odd frame signal, the inner and outer view cones of the first camera (i.e., camera 1) are calculated; if the current frame is determined to be an even frame signal, the inner and outer view cones of the second camera (i.e., camera 2) are calculated.
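A minimal sketch of the frame-rate doubling and the odd/even camera selection described above, assuming the genlock level is sampled as a list of 0/1 values; the sampling representation is an illustrative assumption.

```python
def frame_ticks_from_genlock(levels):
    """Treat both low-to-high and high-to-low transitions of the genlock level as
    frame boundaries, which doubles the effective frame rate (e.g. 25 fps -> 50 fps)."""
    return [i for i in range(1, len(levels)) if levels[i] != levels[i - 1]]

def camera_for_frame(frame_index):
    """Odd frames render and display camera 1's inner view cone, even frames camera 2's."""
    return "camera_1" if frame_index % 2 == 1 else "camera_2"

# Example: a 25 Hz square wave sampled twice per cycle yields a boundary every sample
levels = [1, 0, 1, 0, 1, 0]
ticks = frame_ticks_from_genlock(levels)                   # -> [1, 2, 3, 4, 5]
schedule = [camera_for_frame(i + 1) for i in range(len(ticks))]
# -> ['camera_1', 'camera_2', 'camera_1', 'camera_2', 'camera_1']
```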
The inner and outer view cones are overlaid.
After the inner and outer view cones are overlaid, the overlaid result is displayed by the display processor based on the doubled genlock signal.
Camera 1 and camera 2 each perform image capturing based on the genlock signal.
In this embodiment, camera 1 shoots using the genlock signal directly, while camera 2 shoots using the genlock signal offset by 1/N seconds, where N = 2.
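Assuming the stated 1/N-second offset is applied per camera (an assumption about how the offset is distributed among the cameras), the shutter trigger schedule could be sketched as:

```python
def trigger_times(pulse_times, camera_index, num_cameras=2):
    """Shift camera `camera_index`'s shutter triggers by camera_index / N
    seconds, following the 1/N-second offset described above."""
    offset = camera_index / num_cameras
    return [t + offset for t in pulse_times]

pulses = [0.0, 1.0, 2.0]                      # hypothetical genlock pulse times (s)
print(trigger_times(pulses, camera_index=0))  # camera 1 -> [0.0, 1.0, 2.0]
print(trigger_times(pulses, camera_index=1))  # camera 2 -> [0.5, 1.5, 2.5]
```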
The virtual scene rendering method based on multiple cameras in virtual shooting described above confirms the hardware environment and the software environment of the virtual shooting, which ensures that the cameras, the position acquisition devices, the rendering device, and the display processor of the virtual shooting system can all receive the genlock signal sent by the synchronous phase-locked signal transmitting device, thereby providing the software and hardware support for rendering and displaying the virtual scene.
fig. 14 is a block diagram of a virtual scene rendering device based on multiple cameras in virtual shooting according to an exemplary embodiment of the present application, and as shown in fig. 14, a virtual scene rendering device 500 based on multiple cameras in virtual shooting includes an acquisition module 501, a determination module 502, a fusion module 503, and a rendering presentation module 504.
The acquiring module 501 is configured to acquire position information of each camera; the determining module 502 is configured to determine, based on the position information of each camera, inner view cone information of each camera, where the inner view cone information characterizes picture content of a virtual scene viewed from a view angle of the camera in a virtual shooting process, and determine outer view cone information corresponding to the virtual scene; the fusion module 503 is configured to, if it is detected that the display areas of the inner cone information of at least two target cameras overlap on the display screen, fuse the outer cone information with the inner cone information of each target camera, respectively, so as to obtain fusion information corresponding to each target camera; the rendering and displaying module 504 is configured to render the fusion information corresponding to each target camera, and sequentially display the rendered frame content on the display screen.
In another exemplary embodiment, the virtual scene rendering device 500 based on multiple cameras in virtual shooting further includes a second fusion module and a second rendering and displaying module, where the second fusion module is configured to fuse the outer cone information with the inner cone information of the non-target camera to obtain remaining fusion information, and the non-target camera is a camera other than the target camera among the multiple cameras; the second rendering and displaying module is used for rendering the rest of the fusion information and displaying the picture content obtained by rendering on a display screen.
In another exemplary embodiment, the virtual scene rendering device 500 based on multiple cameras in virtual shooting further includes a third fusion module and a third rendering and displaying module, where the third fusion module is configured to fuse the outer cone information with the inner cone information of multiple cameras to obtain summarized fusion information if no overlapping of the display areas of the inner cone information of any two cameras on the display screen is detected; the third rendering and displaying module is used for rendering the summarized fusion information and displaying the picture content obtained by rendering on a display screen.
In another exemplary embodiment, the fusion module 503 includes a deletion unit and a fusion unit, where the deletion unit is configured to delete, if it is detected that display areas of the inner cone information of at least two target cameras on the display screen overlap, image information of the display areas of the inner cone information of each target camera on the display screen, and obtain outer cone information after deletion; and the fusion unit is used for fusing the deleted outer view cone information with the inner view cone information of the corresponding target camera to obtain fusion information corresponding to the target camera.
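The deletion unit and fusion unit can be illustrated with a rough sketch over 2D screen masks; real implementations would operate on rendered image buffers, and the array layout here is an assumption made for the example:

```python
import numpy as np

def regions_overlap(inner_masks):
    """True if at least two inner view cone display regions share a pixel."""
    return bool(np.any(np.sum(np.stack(inner_masks), axis=0) > 1))

def fuse_per_camera(outer_image, inner_images, inner_masks):
    """Delete every target camera's inner region from the outer view cone
    image, then composite each camera's own inner view cone back in,
    producing one fused frame per target camera."""
    combined_mask = np.any(np.stack(inner_masks), axis=0)  # union of inner regions
    fused_frames = []
    for inner_image, mask in zip(inner_images, inner_masks):
        frame = outer_image.copy()
        frame[combined_mask] = 0              # deleted outer view cone information
        frame[mask] = inner_image[mask]       # this camera's own inner view cone
        fused_frames.append(frame)
    return fused_frames
```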
It should be noted that the apparatus provided in the foregoing embodiments and the method provided in the foregoing embodiments belong to the same concept, and the specific manner in which each module and unit performs its operation has been described in detail in the method embodiments, which is not repeated herein.
In another exemplary embodiment, the application provides an electronic device comprising a processor and a memory, wherein the memory stores computer readable instructions that, when executed by the processor, implement the virtual scene rendering method based on multiple cameras in virtual shooting described above. In this embodiment, the electronic device includes, but is not limited to, a mobile phone, a computer, an intelligent voice interaction device, an intelligent home appliance, a vehicle-mounted terminal, and the like.
Fig. 15 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
It should be noted that, the computer system 1000 of the electronic device shown in fig. 15 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 15, the computer system 1000 includes a central processing unit (Central Processing Unit, CPU) 1001 that can perform various appropriate actions and processes, such as performing the virtual scene rendering method in the above-described embodiments, according to a program stored in a Read-Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data required for system operation are also stored. The CPU 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An Input/Output (I/O) interface 1005 is also connected to the bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output portion 1007 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and a speaker; a storage portion 1008 including a hard disk or the like; and a communication section 1009 including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the internet. The drive 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is installed as needed in the drive 1010, so that a computer program read out therefrom is installed as needed in the storage section 1008.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 1009, and/or installed from the removable medium 1011. When executed by a Central Processing Unit (CPU) 1001, the computer program performs various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with a computer-readable computer program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
It will be appreciated that the specific embodiments of the present application may involve data related to users, such as user information. When the embodiments of the present application are applied to specific products or technologies, user permission or consent is required, and the collection, use, and processing of such data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
Another aspect of the present application also provides a computer-readable storage medium having stored thereon computer-readable instructions that, when executed by a processor, implement a method of virtual scene rendering in virtual photography based on multiple cameras as in any of the previous embodiments.
Another aspect of the present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the virtual scene rendering method based on multiple cameras in the virtual photographing provided in the above embodiments.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
The foregoing is merely a preferred exemplary embodiment of the present application and is not intended to limit the embodiments of the present application, and those skilled in the art may make various changes and modifications according to the main concept and spirit of the present application, so that the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A virtual scene rendering method based on multiple cameras in virtual shooting is characterized by comprising the following steps:
acquiring position information of a plurality of cameras;
determining inner view cone information of each camera based on the position information of each camera, wherein the inner view cone information characterizes the picture content of a virtual scene watched from the visual angle of the camera in the virtual shooting process, and determines outer view cone information corresponding to the virtual scene;
if the inner view cone information of at least two target cameras is detected to be overlapped in the display area on the display screen, the outer view cone information is fused with the inner view cone information of each target camera respectively, and fusion information corresponding to each target camera is obtained;
And respectively rendering the fusion information corresponding to each target camera, and sequentially displaying the rendered picture content on the display screen.
2. The method according to claim 1, wherein the method further comprises:
fusing the outer view cone information with the inner view cone information of a non-target camera to obtain the rest fused information, wherein the non-target camera is a camera except the target camera in the plurality of cameras;
rendering the rest of the fusion information, and displaying the rendered picture content on the display screen.
3. The method according to claim 2, wherein the method further comprises:
if the overlapping of the inner cone information of any two cameras in the display area of the display screen is not detected, fusing the outer cone information with the inner cone information of the plurality of cameras to obtain summarized fusion information;
rendering the summarized fusion information, and displaying the rendered picture content on the display screen.
4. The method according to claim 1, wherein the method further comprises:
and after the picture content obtained by rendering the fusion information corresponding to each target camera is displayed on the display screen, sending notification information to the corresponding target camera, wherein the notification information indicates the target camera to shoot images.
5. The method of claim 1, wherein the position information includes position coordinates, euler angles, and field angles of the camera; the determining the inner view cone information of each camera from the virtual scene based on the position information of each camera comprises the following steps:
and performing perspective projection transformation on the virtual scene based on the position coordinates, euler angles and view angles of each camera so as to obtain the inner view cone information of each camera.
6. The method according to claim 1, wherein if the overlapping of the display areas of the inner cone information of at least two target cameras on the display screen is detected, the fusing the outer cone information with the inner cone information of each target camera respectively, and obtaining the fused information corresponding to each target camera includes:
if the display areas of the inner cone information of at least two target cameras on the display screen are detected to be overlapped, deleting the image information of the display areas of the inner cone information of each target camera on the display screen, and obtaining the deleted outer cone information;
and fusing the deleted outer cone information with the inner cone information of the corresponding target camera to obtain fusion information corresponding to the target camera.
7. The virtual shooting system is characterized by comprising a synchronous phase-locked signal sending device, a rendering device and a plurality of cameras, wherein the rendering device is connected with the synchronous phase-locked signal sending device, and the cameras are arranged in the virtual shooting system, and the synchronous phase-locked signal sending device comprises:
the synchronous phase-locked signal transmitting device is used for transmitting a first synchronous phase-locked signal to each camera and transmitting a second synchronous phase-locked signal to the rendering device, wherein the frequency of the second synchronous phase-locked signal is N times that of the first synchronous phase-locked signal, and N is the number of the cameras;
the plurality of cameras release shutters at the same time to take images in response to the first synchronization phase-locked signal;
the rendering device is configured to perform, in performing the method of any one of claims 1 to 6, picture rendering according to the frequency of the second genlock signal.
8. The system of claim 7, wherein the genlock signal transmitting means transmits the first genlock signal to each camera at 1/N second intervals.
9. A virtual scene rendering apparatus based on multiple cameras in virtual shooting, comprising:
the acquisition module is used for acquiring the position information of each camera;
The device comprises a determining module, a camera shooting module and a camera shooting module, wherein the determining module is used for determining inner view cone information of each camera based on position information of each camera, the inner view cone information represents picture content of a virtual scene watched from a visual angle of the camera in a virtual shooting process, and determining outer view cone information corresponding to the virtual scene;
the fusion module is used for respectively fusing the outer view cone information with the inner view cone information of each target camera to obtain fusion information corresponding to each target camera if the display areas of the inner view cone information of at least two target cameras on the display screen are detected to be overlapped;
and the rendering display module is used for respectively rendering the fusion information corresponding to each target camera and sequentially displaying the picture content obtained by rendering on the display screen.
10. An electronic device, comprising:
a memory storing computer readable instructions;
a processor reading computer readable instructions stored in a memory to perform the method of any one of claims 1-6.
11. A computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor of a computer, cause the computer to perform the method of any of claims 1-6.
CN202211070122.9A 2022-08-31 2022-08-31 Virtual scene rendering method and device, electronic equipment and storage medium Pending CN116506563A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211070122.9A CN116506563A (en) 2022-08-31 2022-08-31 Virtual scene rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211070122.9A CN116506563A (en) 2022-08-31 2022-08-31 Virtual scene rendering method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116506563A true CN116506563A (en) 2023-07-28

Family

ID=87327182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211070122.9A Pending CN116506563A (en) 2022-08-31 2022-08-31 Virtual scene rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116506563A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116761017A (en) * 2023-08-18 2023-09-15 湖南马栏山视频先进技术研究院有限公司 High availability method and system for video real-time rendering
CN116761017B (en) * 2023-08-18 2023-10-17 湖南马栏山视频先进技术研究院有限公司 High availability method and system for video real-time rendering

Similar Documents

Publication Publication Date Title
US11727644B2 (en) Immersive content production system with multiple targets
JP6471777B2 (en) Image processing apparatus, image processing method, and program
KR102013978B1 (en) Method and apparatus for fusion of images
CN109064545B (en) Method and device for data acquisition and model generation of house
US20110158509A1 (en) Image stitching method and apparatus
US11095871B2 (en) System that generates virtual viewpoint image, method and storage medium
CN106296589B (en) Panoramic image processing method and device
CN112312111A (en) Virtual image display method and device, electronic equipment and storage medium
CN113115110B (en) Video synthesis method and device, storage medium and electronic equipment
CN116506563A (en) Virtual scene rendering method and device, electronic equipment and storage medium
CN113838116B (en) Method and device for determining target view, electronic equipment and storage medium
KR102108246B1 (en) Method and apparatus for providing video in potable device
CN116320363A (en) Multi-angle virtual reality shooting method and system
CN115830224A (en) Multimedia data editing method and device, electronic equipment and storage medium
US11825191B2 (en) Method for assisting the acquisition of media content at a scene
US20210297649A1 (en) Image data output device, content creation device, content reproduction device, image data output method, content creation method, and content reproduction method
CN116320364B (en) Virtual reality shooting method and display method based on multi-layer display
EP3934260A1 (en) Transport of a movie in multiple frame rates to a film auditorium
JP5646033B2 (en) Image display device and image display method
CN114640838B (en) Picture synthesis method and device, electronic equipment and readable storage medium
CN116800737A (en) Virtual shooting method, device and equipment
CN114885146A (en) Large screen-based multi-machine-position virtual fusion method and system
CN116962652A (en) XR virtual shooting method and device and XR media server
CN113870165A (en) Image synthesis method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40091464

Country of ref document: HK