CN114845147B - Screen rendering method, display screen synthesizing method and device and intelligent terminal - Google Patents


Info

Publication number: CN114845147B
Authority: CN (China)
Prior art keywords: virtual camera, screen, virtual, rendering, screen area
Legal status: Active (granted)
Application number: CN202210466757.4A
Other languages: Chinese (zh)
Other versions: CN114845147A
Inventors: 王视鎏, 黄毅然
Current assignee: Beijing QIYI Century Science and Technology Co Ltd
Original assignee: Beijing QIYI Century Science and Technology Co Ltd
Priority/filing date: 2022-04-29
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Publication of CN114845147A (application): 2022-08-02
Application granted; publication of CN114845147B (grant): 2024-01-16


Classifications

    • H04N21/42653 — Internal components of the client for processing graphics (under H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD])
    • H04N21/44008 — Processing of video elementary streams, involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/44012 — Processing of video elementary streams, involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a screen rendering method, a display screen synthesizing method and device, and an intelligent terminal, belonging to the technical field of virtual reality. The screen rendering method includes: obtaining current position information of a physical camera; creating a first virtual camera and a second virtual camera whose position information is consistent with the current position of the physical camera; judging whether the field angle required by the virtual camera to shoot the complete screen area from the current position is smaller than a preset field angle threshold; if yes, performing rendering of a preset scene according to the shooting parameters of the first virtual camera; if not, splitting the second virtual camera into a plurality of sub virtual cameras whose field angles each cover a part of the screen area, performing rendering of the preset scene according to the shooting parameters of the plurality of sub virtual cameras, and sending each rendering result to the corresponding part of the screen area. The method and device can ensure the integrity of the scene displayed on the screen, thereby improving the viewing experience of the audience.

Description

Screen rendering method, display screen synthesizing method and device and intelligent terminal
Technical Field
The application belongs to the technical field of virtual reality, and particularly relates to a screen rendering method, a display screen synthesizing method and device, and an intelligent terminal.
Background
With the development of virtual reality technology, traditional stage art and single-medium film art forms are increasingly unable to meet the needs of audiences, and virtual stage technology has emerged. A short film is produced with three-dimensional content creation software and rendered to form a three-dimensional scene; the virtual stage comprises a plurality of LED screens arranged at the top (or bottom) and around the stage, and the three-dimensional scene is projected onto these LED screens to present the final stage effect. The final stage effect is presented in cooperation with a virtual camera that switches pictures: when the virtual camera is closer to the virtual stage, the stage scene is rendered on the virtual stage at higher resolution, and when the virtual camera is farther from the virtual stage, the resolution is lower.
When the view angle of the virtual camera is strongly biased, it can only cover part of the LED screens, and the scene picture displayed by the LED screens is incomplete. Because the resolution of the rendered picture is fixed and the content always follows the lens of the virtual camera, content on the LED screens is lost at this moment, bringing a poor viewing experience to the live audience.
Disclosure of Invention
In order to overcome, at least to a certain extent, the problem in traditional LED screen rendering methods that content on the LED screen is missing when the virtual camera view angle is biased, the application provides a screen rendering method, a display picture synthesizing method and device, and an intelligent terminal.
In a first aspect, the present application provides a screen rendering method, applicable to a rendering device, comprising:
acquiring current position information of a physical camera;
creating a first virtual camera and a second virtual camera, and making the position information of the first virtual camera and the second virtual camera consistent with the current position of the physical camera;
judging whether the field angles of the first virtual camera and the second virtual camera when shooting the complete screen area from the current position are smaller than a preset field angle threshold;
if yes, performing rendering of a preset scene according to the shooting parameters of the first virtual camera, and sending the rendering result to the screen area;
if not, splitting the second virtual camera into a plurality of sub virtual cameras, wherein the field angle of each sub virtual camera covers a part of the screen area, performing rendering of the preset scene according to the shooting parameters of the plurality of sub virtual cameras, and sending each rendering result to the corresponding part of the screen area.
Further, the judging whether the field angles of the first virtual camera and the second virtual camera when shooting the complete screen area are smaller than a preset field angle threshold comprises:
judging whether the field angle of the first virtual camera when shooting the complete screen area is smaller than the preset field angle threshold;
and,
when the field angle of the first virtual camera when shooting the complete screen area is not smaller than the preset field angle threshold, judging whether the field angle of the second virtual camera when shooting the complete screen area is smaller than the preset field angle threshold, wherein the field angle of the second virtual camera is the minimum field angle required by the second virtual camera to shoot the complete screen at the current position.
Further, the method for acquiring the minimum field angle required by the second virtual camera to shoot the complete screen at the current position comprises:
constructing a three-dimensional model of the screen;
projecting the three-dimensional model and the second virtual camera onto a two-dimensional plane to obtain the stage edge points corresponding to the three-dimensional model and the orientation and field angle corresponding to the second virtual camera;
and adjusting the orientation and field angle of the second virtual camera on the two-dimensional plane to obtain the minimum field angle at which the second virtual camera covers all stage edge points.
Further, the field angle of each sub virtual camera covering a part of the screen area comprises:
adjusting the field angle of each sub virtual camera so that each sub virtual camera covers a corresponding partial screen area, and the partial screen areas do not overlap.
Further, the method for determining that each sub virtual camera covers its corresponding partial screen area comprises:
acquiring the minimum field angle required by each sub virtual camera when shooting each partial screen area;
judging whether the minimum field angle of the sub virtual camera when shooting the partial screen area is smaller than the preset field angle threshold;
if yes, determining the partial screen area as the partial screen area corresponding to the sub virtual camera.
Further, the shooting parameters include at least one of a camera position, a shooting angle, and a field angle.
In a second aspect, the present application provides a display screen synthesis method, applicable to an intelligent terminal, comprising:
acquiring a virtual screen scene shot by the first virtual camera or the second virtual camera;
acquiring the screen and a physical character shot by a physical camera;
fusing the virtual screen scene with the screen;
and synthesizing the screen displaying the virtual screen scene with the physical character to obtain a display picture.
In a third aspect, the present application provides a screen rendering apparatus, comprising:
a first acquisition module, configured to acquire current position information of a physical camera;
a creation module, configured to create a first virtual camera and a second virtual camera so that the position information of the first virtual camera and the second virtual camera is consistent with the current position of the physical camera;
a judging module, configured to judge whether the field angles of the first virtual camera and the second virtual camera when shooting the complete screen area are smaller than a preset field angle threshold;
a first rendering module, configured to perform rendering of a preset scene according to the shooting parameters of the first virtual camera when the field angle when shooting the complete screen area is smaller than the preset field angle threshold, and send the rendering result to the screen area;
and a second rendering module, configured to split the second virtual camera into a plurality of sub virtual cameras when the field angle when shooting the complete screen area is not smaller than the preset field angle threshold, wherein the field angle of each sub virtual camera covers a part of the screen area, perform rendering of the preset scene according to the shooting parameters of the plurality of sub virtual cameras, and send each rendering result to the corresponding part of the screen area.
In a fourth aspect, the present application provides a display screen synthesizing apparatus, comprising:
a second acquisition module, configured to acquire a virtual screen scene shot by the first virtual camera or the second virtual camera;
a third acquisition module, configured to acquire the screen and a physical character shot by a physical camera;
a fusion module, configured to fuse the virtual screen scene with the screen;
and a synthesis module, configured to synthesize the screen displaying the virtual screen scene with the physical character to obtain a display picture.
In a fifth aspect, the present application provides an intelligent terminal, including:
one or more memories having executable programs stored thereon;
one or more processors configured to execute the executable program in the memory to implement the steps of the screen rendering method according to the first aspect and/or the steps of the display screen synthesis method according to the second aspect.
The technical scheme provided by the embodiment of the application can comprise the following beneficial effects:
the embodiment of the invention provides a screen rendering method, a display screen synthesizing method device and an intelligent terminal, wherein the screen rendering method comprises the steps of obtaining current position information of an entity camera; creating a first virtual camera and a second virtual camera, so that the position information of the first virtual camera and the second virtual camera is consistent with the current position of the entity camera; judging whether the field angle of the first virtual camera and the second virtual camera at the current position is smaller than a preset field angle threshold value when the first virtual camera and the second virtual camera shoot the complete screen area; if yes, executing rendering of the preset scene according to shooting parameters of the first virtual camera, and sending a rendering result to a screen area; if not, splitting the second virtual camera into a plurality of sub virtual cameras, wherein the view angles of the sub virtual cameras respectively cover part of the screen area, rendering a preset scene according to the shooting parameters of the sub virtual cameras, sending a rendering result to the corresponding part of the screen area, and shooting the part of the screen area respectively by splitting the virtual camera into the plurality of sub virtual cameras when the screen picture is displayed incompletely or in a fuzzy state, so that the integrity of the screen display scene can be ensured, and the viewing sense of a spectator is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a flowchart of a screen rendering method according to an embodiment of the present application.
Fig. 2 is a flowchart of a screen rendering method according to another embodiment of the present application.
Fig. 3 is a flowchart of another screen rendering method according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a minimum angle of view acquisition method according to an embodiment of the present application.
Fig. 5 is a flowchart of another screen rendering method according to an embodiment of the present application.
Fig. 6 is a flowchart of a method for synthesizing a display screen according to an embodiment of the present application.
Fig. 7 is a functional block diagram of a screen rendering apparatus according to an embodiment of the present application.
Fig. 8 is a functional block diagram of a display screen synthesizing apparatus according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions, and advantages of the present application more apparent, the technical solutions of the present application will be described in detail below. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
Fig. 1 is a flowchart of a screen rendering method according to an embodiment of the present application. As shown in fig. 1, the screen rendering method is applicable to a rendering device and includes:
S11: acquiring current position information of a physical camera;
S12: creating a first virtual camera and a second virtual camera so that the position information of the first virtual camera and the second virtual camera is consistent with the current position of the physical camera;
S13: judging whether the field angles of the first virtual camera and the second virtual camera when shooting the complete screen area from the current position are smaller than a preset field angle threshold; if yes, executing S14; if not, executing S15;
S14: performing rendering of a preset scene according to the shooting parameters of the first virtual camera, and sending the rendering result to the screen area;
S15: splitting the second virtual camera into a plurality of sub virtual cameras, wherein the field angle of each sub virtual camera covers a part of the screen area, performing rendering of the preset scene according to the shooting parameters of the plurality of sub virtual cameras, and sending each rendering result to the corresponding part of the screen area.
In the traditional presentation of an LED stage screen effect, when the view angle of the virtual camera is strongly biased and can only cover part of the LED screens, the scene picture displayed by the LED screens is incomplete. Because the resolution of the rendered picture is fixed and the content always follows the lens of the virtual camera, content on the LED screens is lost, bringing a poor viewing experience to the live audience.
In this embodiment, the screen rendering method includes obtaining current position information of the physical camera; creating a first virtual camera and a second virtual camera so that their position information is consistent with the current position of the physical camera; judging whether the field angles of the first virtual camera and the second virtual camera when shooting the complete screen area from the current position are smaller than a preset field angle threshold; if yes, performing rendering of a preset scene according to the shooting parameters of the first virtual camera and sending the rendering result to the screen area; if not, splitting the second virtual camera into a plurality of sub virtual cameras whose field angles each cover a part of the screen area, performing rendering of the preset scene according to the shooting parameters of the plurality of sub virtual cameras, and sending each rendering result to the corresponding part of the screen area. By splitting the virtual camera into a plurality of sub virtual cameras that each shoot a part of the screen area when the screen picture would otherwise be incomplete or blurred, the integrity of the scene displayed on the screen can be ensured, thereby improving the viewing experience of the audience.
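It should be noted that the following is a minimal, illustrative Python sketch of the decision flow of fig. 1. The names VirtualCamera, min_fov_for_full_screen, aim_at_region, and render_and_send are assumptions introduced here for illustration and are not identifiers from the present application; the geometric and rendering details are passed in as callables.

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    FOV_THRESHOLD_DEG = 180.0  # example preset field angle threshold used in the embodiments below

    @dataclass
    class VirtualCamera:
        position: Tuple[float, float, float]  # kept consistent with the physical camera (S12)
        orientation_deg: float = 0.0
        fov_deg: float = 0.0

    def render_screen(physical_cam_pos: Tuple[float, float, float],
                      screen_regions: List[str],
                      min_fov_for_full_screen: Callable[[Tuple[float, float, float]], float],
                      aim_at_region: Callable[[Tuple[float, float, float], str], Tuple[float, float]],
                      render_and_send: Callable[[VirtualCamera, str], None]) -> None:
        """S11-S15 of fig. 1: render with one camera when the complete screen area
        fits below the field angle threshold, otherwise one sub virtual camera per
        partial screen area."""
        cam1 = VirtualCamera(physical_cam_pos)  # S12: same position as the physical camera
        cam2 = VirtualCamera(physical_cam_pos)
        # S13: minimum field angle needed to shoot the complete screen area from here
        cam1.fov_deg = min_fov_for_full_screen(cam1.position)
        if cam1.fov_deg < FOV_THRESHOLD_DEG:
            render_and_send(cam1, "full_screen")              # S14
        else:
            for region in screen_regions:                     # S15: split the second camera
                sub = VirtualCamera(cam2.position)
                sub.orientation_deg, sub.fov_deg = aim_at_region(sub.position, region)
                render_and_send(sub, region)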
Fig. 2 is a flowchart of a screen rendering method according to another embodiment of the present application. As shown in fig. 2, the screen rendering method includes:
S21: judging whether the field angle of the first virtual camera when shooting the complete screen area is smaller than a preset field angle threshold; if yes, executing S22; if not, executing S23;
In this embodiment, the preset field angle threshold is, for example, 180°.
S22: performing rendering of a preset scene according to the shooting parameters of the first virtual camera, and sending the rendering result to the screen area;
In the present embodiment, the shooting parameters include at least one of a camera position, a shooting angle, and a field angle. Rendering is performed according to parameters such as the camera position, shooting angle, and field angle of the virtual camera, so that a screen scene with correct perspective is obtained from the viewpoint of the real camera and the live audience can see a complete and clear screen scene.
It should be noted that the preset scene may be prefabricated according to the desired stage presentation effect; the scene is, for example, a natural landscape or a building, so as to present a stage effect in which the physical character stands in the virtual scene.
S23: judging whether the field angle of the second virtual camera when shooting the complete screen area is smaller than the preset field angle threshold, wherein the field angle of the second virtual camera is the minimum field angle required by the second virtual camera to shoot the complete screen at the current position; if yes, executing S22; if not, executing S24;
S24: splitting the second virtual camera into a plurality of sub virtual cameras, wherein the field angle of each sub virtual camera covers a part of the screen area, performing rendering of the preset scene according to the shooting parameters of the plurality of sub virtual cameras, and sending each rendering result to the corresponding part of the screen area.
It should be noted that when the physical camera is far from the screen, the position of the virtual camera is correspondingly far, and the screen scene may be unclear. In this case, according to the method of the present application, the second virtual camera is split into multiple sub virtual cameras, each sub virtual camera shoots only a part of the screen to obtain its shooting parameters, rendering of the preset scene is performed according to the shooting parameters of the multiple sub virtual cameras, and each rendering result is sent to the corresponding part of the screen area, so as to ensure that the screen picture is clear.
In some embodiments, as shown in fig. 3, the method for acquiring the minimum field angle required by the second virtual camera to shoot the complete screen at the current position includes:
S31: constructing a three-dimensional model of the screen;
S32: projecting the three-dimensional model and the second virtual camera onto a two-dimensional plane to obtain the stage edge points corresponding to the three-dimensional model and the orientation and field angle corresponding to the second virtual camera;
S33: adjusting the orientation and field angle of the second virtual camera on the two-dimensional plane to obtain the minimum field angle at which the second virtual camera covers all stage edge points;
S34: taking the minimum field angle at which the picture shot by the second virtual camera includes the whole stage as the minimum field angle required by the second virtual camera when shooting the whole virtual stage.
In this embodiment, the coverage of the field angle and the positions of the stage edge points are shown in fig. 4. The second virtual camera is set up to look at the virtual stage, and its orientation and field angle are adjusted to obtain the minimum field angle; whether this minimum field angle is smaller than the field angle threshold for keeping the picture complete and clear is then judged, so that the integrity and definition of the screen content are ensured.
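It should be noted that, as an illustration of S31-S33, once the screen model and the camera are projected onto the two-dimensional plane, the minimum field angle covering all stage edge points can be computed as the smallest angular arc, seen from the camera position, that contains the bearing of every edge point. The following sketch assumes this two-dimensional setting; the function name and data layout are illustrative only.

    import math
    from typing import List, Tuple

    def min_fov_covering_points(cam_xy: Tuple[float, float],
                                edge_points: List[Tuple[float, float]]) -> Tuple[float, float]:
        """Return (orientation_deg, min_fov_deg): the smallest field angle, and the
        matching camera orientation, covering every projected stage edge point.
        Assumes at least two edge points."""
        cx, cy = cam_xy
        # Bearing of each edge point as seen from the camera, normalised to [0, 360)
        bearings = sorted(math.degrees(math.atan2(py - cy, px - cx)) % 360.0
                          for (px, py) in edge_points)
        n = len(bearings)
        # The minimal covering arc is the full circle minus the largest angular gap
        # between consecutive bearings (including the wrap-around gap).
        gaps = [(bearings[(i + 1) % n] - bearings[i]) % 360.0 for i in range(n)]
        i_max = max(range(n), key=gaps.__getitem__)
        min_fov = 360.0 - gaps[i_max]
        # Aim the optical axis at the middle of the covering arc.
        arc_start = bearings[(i_max + 1) % n]
        orientation = (arc_start + min_fov / 2.0) % 360.0
        return orientation, min_fov

For example, with the camera at the origin and edge points at (1, 1) and (-1, 1), the function returns an orientation of 90° and a minimum field angle of 90°.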
In this embodiment, the field angle of each sub virtual camera covering a part of the screen area includes:
adjusting the field angle of each sub virtual camera so that each sub virtual camera covers a corresponding partial screen area, and the partial screen areas do not overlap.
For example, the LED screen on the stage includes a front screen area that the audience below the stage looks at, left and right side screen areas adjacent to the front screen area, and top and ground screen areas. The second virtual camera is split into 5 sub virtual cameras, and the field angle of each sub virtual camera is adjusted so that one sub virtual camera is aimed at the front screen area, one at the left side screen area, one at the right side screen area, one at the top screen area, and one at the ground screen area.
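A sketch of this split under the same two-dimensional assumption is given below, reusing min_fov_covering_points from the sketch above to aim one sub virtual camera at each screen area. The corner coordinates are invented for illustration, and only the horizontally adjacent screen areas are shown; the top and ground screen areas would need the same treatment in a vertical plane.

    # Hypothetical projected corner points of three of the screen areas (2D plane).
    regions_2d = {
        "front": [(-4.0, 8.0), (4.0, 8.0)],
        "left":  [(-4.0, 8.0), (-4.0, 2.0)],
        "right": [(4.0, 8.0), (4.0, 2.0)],
    }

    cam_xy = (0.0, 0.0)  # position of the second virtual camera on the 2D plane
    sub_cameras = {}
    for name, corners in regions_2d.items():
        orientation_deg, fov_deg = min_fov_covering_points(cam_xy, corners)
        sub_cameras[name] = {"orientation_deg": orientation_deg, "fov_deg": fov_deg}
    # Each sub virtual camera is aimed at exactly one screen area; the areas do not overlap.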
In some embodiments, as shown in fig. 5, the method for determining that each sub virtual camera covers its corresponding partial screen area includes:
S51: acquiring the minimum field angle required by each sub virtual camera when shooting each partial screen area;
S52: judging whether the minimum field angle of the sub virtual camera when shooting the partial screen area is smaller than the preset field angle threshold; if yes, executing S53;
S53: determining the partial screen area as the partial screen area corresponding to the sub virtual camera.
For example, if the physical camera moves deep into the stage so that only the front screen area and the left side screen area are shot, the first virtual camera cannot shoot the complete screen area; the remaining screen areas are then left as they are, and no scene is rendered for them.
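It should be noted that the following is a minimal sketch of S51-S53 under the same assumptions: a partial screen area is assigned to a sub virtual camera only when the minimum field angle required to shoot it stays below the preset threshold, and areas that fail the check are left without a rendered scene, as in the example above.

    def assign_regions(cam_xy, regions_2d, fov_threshold_deg=180.0):
        """Return {region name: (orientation_deg, fov_deg)} for coverable areas only."""
        assigned = {}
        for name, corners in regions_2d.items():
            orientation_deg, fov_deg = min_fov_covering_points(cam_xy, corners)  # S51
            if fov_deg < fov_threshold_deg:                                      # S52
                assigned[name] = (orientation_deg, fov_deg)                      # S53
            # Areas failing the check get no sub virtual camera and stay unrendered.
        return assigned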
Fig. 6 is a flowchart of a display screen synthesizing method according to an embodiment of the present application. As shown in fig. 6, the display screen synthesizing method includes:
S61: acquiring a virtual screen scene shot by the first virtual camera or the second virtual camera;
In this embodiment, the first virtual camera or the second virtual camera is the first virtual camera or the second virtual camera described in the above embodiments.
S62: acquiring the screen and the physical character shot by the physical camera;
S63: fusing the virtual screen scene with the screen;
S64: synthesizing the screen displaying the virtual screen scene with the physical character to obtain a display picture.
In this embodiment, the virtual screen scene shot by the first virtual camera or the second virtual camera ensures that the screen scene is complete, and the display picture is obtained by synthesizing the screen displaying the virtual screen scene with the physical character, so that the terminal user can view a complete and clear augmented-reality display picture.
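The composition of S63-S64 can be pictured with the following minimal sketch, assuming the physical camera frame, the rendered virtual screen scene, and a per-pixel mask of the screen's footprint in that frame are already available as NumPy arrays of matching shape; all names are illustrative.

    import numpy as np

    def compose_display_frame(camera_frame: np.ndarray,
                              virtual_screen_scene: np.ndarray,
                              screen_mask: np.ndarray) -> np.ndarray:
        """Fuse the virtual screen scene into the screen's footprint in the physical
        camera frame; the physical character and everything else outside the screen
        area are left untouched."""
        frame = camera_frame.copy()
        mask = screen_mask.astype(bool)
        frame[mask] = virtual_screen_scene[mask]
        return frame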
Fig. 7 is a functional block diagram of a screen rendering apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus includes:
a first acquisition module 71, configured to acquire current position information of the physical camera;
a creation module 72, configured to create a first virtual camera and a second virtual camera so that the position information of the first virtual camera and the second virtual camera is consistent with the current position of the physical camera;
a judging module 73, configured to judge whether the field angles of the first virtual camera and the second virtual camera when shooting the complete screen area are smaller than a preset field angle threshold;
a first rendering module 74, configured to perform rendering of a preset scene according to the shooting parameters of the first virtual camera when the field angle when shooting the complete screen area is smaller than the preset field angle threshold, and send the rendering result to the screen area;
and a second rendering module 75, configured to split the second virtual camera into a plurality of sub virtual cameras when the field angle when shooting the complete screen area is not smaller than the preset field angle threshold, wherein the field angle of each sub virtual camera covers a part of the screen area, perform rendering of the preset scene according to the shooting parameters of the plurality of sub virtual cameras, and send each rendering result to the corresponding part of the screen area.
In some embodiments, the apparatus further includes:
a construction module 76, configured to construct a three-dimensional model of the screen;
a projection module 77, configured to project the three-dimensional model and the second virtual camera onto a two-dimensional plane to obtain the stage edge points corresponding to the three-dimensional model and the orientation and field angle corresponding to the second virtual camera;
and an adjusting module 78, configured to adjust the orientation and field angle of the second virtual camera on the two-dimensional plane to obtain the minimum field angle at which the second virtual camera covers all stage edge points.
In some embodiments, the adjusting module is further configured to:
adjust the field angle of each sub virtual camera so that each sub virtual camera covers a corresponding partial screen area, and the partial screen areas do not overlap.
In this embodiment, the first acquisition module acquires current position information of the physical camera; the creation module creates a first virtual camera and a second virtual camera so that their position information is consistent with the current position of the physical camera; the judging module judges whether the field angles of the first virtual camera and the second virtual camera when shooting the complete screen area from the current position are smaller than a preset field angle threshold; when the field angle is smaller than the threshold, the first rendering module performs rendering of a preset scene according to the shooting parameters of the first virtual camera and sends the rendering result to the screen area; when it is not smaller than the threshold, the second rendering module splits the second virtual camera into a plurality of sub virtual cameras whose field angles each cover a part of the screen area, performs rendering of the preset scene according to the shooting parameters of the plurality of sub virtual cameras, and sends each rendering result to the corresponding part of the screen area. The integrity of the stage screen scene can thus be ensured, improving the viewing experience of the audience.
Fig. 8 is a functional block diagram of a display screen synthesizing apparatus according to an embodiment of the present application. As shown in fig. 8, the display screen synthesizing apparatus includes:
a second acquisition module 81, configured to acquire a virtual screen scene shot by the first virtual camera or the second virtual camera;
a third acquisition module 82, configured to acquire the screen and the physical character shot by the physical camera;
a fusion module 83, configured to fuse the virtual screen scene with the screen;
and a synthesizing module 84, configured to synthesize the screen displaying the virtual screen scene with the physical character to obtain a display picture.
In this embodiment, the second acquisition module acquires the virtual screen scene shot by the first virtual camera or the second virtual camera, the third acquisition module acquires the screen and the physical character shot by the physical camera, the fusion module fuses the virtual screen scene with the screen, and the synthesizing module synthesizes the screen displaying the virtual screen scene with the physical character to obtain the display picture, so that the terminal user can view a complete and clear augmented-reality display picture.
The intelligent terminal provided by the embodiment of the application comprises:
one or more memories having executable programs stored thereon;
one or more processors configured to execute the executable program in the memory to implement the steps of the screen rendering method and/or the steps of the display screen synthesis method described in the above embodiments.
It is to be understood that the same or similar parts in the above embodiments may be referred to each other, and that in some embodiments, the same or similar parts in other embodiments may be referred to.
It should be noted that in the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, "plurality" means at least two.
Any process or method description in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art of the embodiments of the present application.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the steps may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the above method embodiments may be implemented by a program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the application; variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.
It should be noted that the present invention is not limited to the above preferred embodiments; those skilled in the art can derive other products in various forms without departing from the scope of the present invention. However, any change in shape or structure, and any technical solution identical or similar to that of the present application, falls within the protection scope of the present invention.

Claims (9)

1. A screen rendering method, the method being applicable to a rendering device, comprising:
acquiring current position information of a physical camera;
creating a first virtual camera and a second virtual camera, and making the position information of the first virtual camera and the second virtual camera consistent with the current position of the physical camera;
judging whether the field angle of the first virtual camera when shooting a complete screen area is smaller than a preset field angle threshold;
if the field angle of the first virtual camera when shooting the complete screen area is smaller than the preset field angle threshold, performing rendering of a preset scene according to the shooting parameters of the first virtual camera, and sending the rendering result to the screen area;
when the field angle of the first virtual camera when shooting the complete screen area is not smaller than the preset field angle threshold, judging whether the field angle of the second virtual camera when shooting the complete screen area is smaller than the preset field angle threshold, wherein the field angle of the second virtual camera is the minimum field angle required by the second virtual camera to shoot the complete screen at the current position;
if the field angle of the second virtual camera when shooting the complete screen area is not smaller than the preset field angle threshold, splitting the second virtual camera into a plurality of sub virtual cameras, wherein the field angle of each sub virtual camera covers a part of the screen area, performing rendering of the preset scene according to the shooting parameters of the plurality of sub virtual cameras, and sending each rendering result to the corresponding part of the screen area.
2. The screen rendering method according to claim 1, wherein the method for acquiring the minimum field angle required by the second virtual camera to shoot the complete screen at the current position comprises:
constructing a three-dimensional model of the screen;
projecting the three-dimensional model and the second virtual camera onto a two-dimensional plane to obtain the stage edge points corresponding to the three-dimensional model and the orientation and field angle corresponding to the second virtual camera;
and adjusting the orientation and field angle of the second virtual camera on the two-dimensional plane to obtain the minimum field angle at which the second virtual camera covers all stage edge points.
3. The screen rendering method according to claim 1, wherein the field angle of each sub virtual camera covering a part of the screen area comprises:
adjusting the field angle of each sub virtual camera so that each sub virtual camera covers a corresponding partial screen area, and the partial screen areas do not overlap.
4. The screen rendering method according to claim 3, wherein the method for determining that each sub virtual camera covers its corresponding partial screen area comprises:
acquiring the minimum field angle required by each sub virtual camera when shooting each partial screen area;
judging whether the minimum field angle of the sub virtual camera when shooting the partial screen area is smaller than the preset field angle threshold;
if yes, determining the partial screen area as the partial screen area corresponding to the sub virtual camera.
5. The screen rendering method according to claim 1, wherein the shooting parameters include at least one of a camera position, a shooting angle, and a field angle.
6. A display screen synthesis method, applicable to an intelligent terminal, comprising:
acquiring a virtual screen scene shot by a first virtual camera or a second virtual camera, the virtual screen scene being generated according to the screen rendering method of any one of claims 1 to 5;
acquiring the screen and a physical character shot by a physical camera;
fusing the virtual screen scene with the screen;
and synthesizing the screen displaying the virtual screen scene with the physical character to obtain a display picture.
7. A screen rendering apparatus, comprising:
a first acquisition module, configured to acquire current position information of a physical camera;
a creation module, configured to create a first virtual camera and a second virtual camera so that the position information of the first virtual camera and the second virtual camera is consistent with the current position of the physical camera;
a judging module, configured to judge whether the field angle of the first virtual camera when shooting a complete screen area is smaller than a preset field angle threshold;
a first rendering module, configured to perform rendering of a preset scene according to the shooting parameters of the first virtual camera when the field angle of the first virtual camera when shooting the complete screen area is smaller than the preset field angle threshold, and send the rendering result to the screen area;
and a second rendering module, configured to judge, when the field angle of the first virtual camera when shooting the complete screen area is not smaller than the preset field angle threshold, whether the field angle of the second virtual camera when shooting the complete screen area is smaller than the preset field angle threshold, the field angle of the second virtual camera being the minimum field angle required by the second virtual camera to shoot the complete screen at the current position; and, when the field angle of the second virtual camera when shooting the complete screen area is not smaller than the preset field angle threshold, split the second virtual camera into a plurality of sub virtual cameras, wherein the field angle of each sub virtual camera covers a part of the screen area, perform rendering of the preset scene according to the shooting parameters of the plurality of sub virtual cameras, and send each rendering result to the corresponding part of the screen area.
8. A display screen synthesizing apparatus, comprising:
a second acquisition module, configured to acquire a virtual screen scene shot by a first virtual camera or a second virtual camera, the virtual screen scene being generated by the screen rendering apparatus of claim 7;
a third acquisition module, configured to acquire the screen and a physical character shot by a physical camera;
a fusion module, configured to fuse the virtual screen scene with the screen;
and a synthesis module, configured to synthesize the screen displaying the virtual screen scene with the physical character to obtain a display picture.
9. An intelligent terminal, characterized by comprising:
one or more memories having executable programs stored thereon;
one or more processors configured to execute the executable program in the memory to implement the steps of the screen rendering method according to any one of claims 1 to 5 and/or the steps of the display screen synthesis method according to claim 6.
CN202210466757.4A 2022-04-29 2022-04-29 Screen rendering method, display screen synthesizing method and device and intelligent terminal Active CN114845147B (en)

Priority Applications (1)

Application Number: CN202210466757.4A — Priority Date: 2022-04-29 — Filing Date: 2022-04-29 — Title: Screen rendering method, display screen synthesizing method and device and intelligent terminal

Publications (2)

CN114845147A (en) — published 2022-08-02
CN114845147B (en) — published 2024-01-16

Family

ID: 82568336

Family Applications (1)

Application Number: CN202210466757.4A — Title: Screen rendering method, display screen synthesizing method and device and intelligent terminal — Priority Date: 2022-04-29 — Filing Date: 2022-04-29 — Status: Active

Country Status (1)

CN: CN114845147B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012508B (en) * 2023-03-28 2023-06-23 高德软件有限公司 Lane line rendering method, device and storage medium
CN116260956B (en) * 2023-05-15 2023-07-18 四川中绳矩阵技术发展有限公司 Virtual reality shooting method and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019041351A1 (en) * 2017-09-04 2019-03-07 艾迪普(北京)文化科技股份有限公司 Real-time aliasing rendering method for 3d vr video and virtual three-dimensional scene
JP2019145059A (en) * 2018-02-22 2019-08-29 大日本印刷株式会社 Information processing unit, information processing system, information processing method and program
WO2020140720A1 (en) * 2019-01-02 2020-07-09 京东方科技集团股份有限公司 Rendering method and apparatus for virtual reality scene, and device
WO2020215789A1 (en) * 2019-04-26 2020-10-29 北京字节跳动网络技术有限公司 Virtual paintbrush implementing method and apparatus, and computer readable storage medium
CN112019924A (en) * 2020-09-07 2020-12-01 中图云创智能科技(北京)有限公司 Method for setting FOV of panoramic player
CN112040092A (en) * 2020-09-08 2020-12-04 杭州时光坐标影视传媒股份有限公司 Real-time virtual scene LED shooting system and method
CN112330736A (en) * 2020-11-02 2021-02-05 北京虚拟动点科技有限公司 Scene picture shooting method and device, electronic equipment and storage medium
CN113709543A (en) * 2021-02-26 2021-11-26 腾讯科技(深圳)有限公司 Video processing method and device based on virtual reality, electronic equipment and medium
CN113923377A (en) * 2021-10-11 2022-01-11 浙江博采传媒有限公司 Virtual film-making system of LED (light emitting diode) circular screen

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10497182B2 (en) * 2017-10-03 2019-12-03 Blueprint Reality Inc. Mixed reality cinematography using remote activity stations
US10845971B2 (en) * 2018-03-15 2020-11-24 International Business Machines Corporation Generating display regions in a display screen for multi-directional viewing


Also Published As

Publication number Publication date
CN114845147A (en) 2022-08-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant