CN117475069A - VR-based stereoscopic rendering optimization method and device - Google Patents

VR-based stereoscopic rendering optimization method and device

Info

Publication number
CN117475069A
CN117475069A (application number CN202311460150.6A)
Authority
CN
China
Prior art keywords
area
camera
coloring
rendered
illumination information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311460150.6A
Other languages
Chinese (zh)
Inventor
郭少涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Weiling Times Technology Co Ltd
Original Assignee
Beijing Weiling Times Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Weiling Times Technology Co Ltd filed Critical Beijing Weiling Times Technology Co Ltd
Priority to CN202311460150.6A priority Critical patent/CN117475069A/en
Publication of CN117475069A publication Critical patent/CN117475069A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/40Hidden part removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a VR-based stereoscopic rendering optimization method and device, relating to the technical field of image processing. The method comprises the following steps: acquiring the area to be rendered based on the left camera and the right camera respectively, and partitioning the area to be rendered; calculating the geometry covered by the partitioned area to be rendered, and processing the geometry to obtain the processed area to be rendered; and calculating the coloring and illumination information of the processed area to be rendered through the left camera and the right camera to obtain a stereoscopic rendering result. The invention can effectively reduce repeated rendering logic while ensuring the rendering effect, effectively reduce rendering consumption, and improve rendering efficiency.

Description

VR-based stereoscopic rendering optimization method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a VR-based stereoscopic rendering optimization method and device.
Background
In the existing field of computer graphics rendering, especially in VR (Virtual Reality) applications, stereoscopic rendering (Stereo Render) is a very important technology. A typical game rendering engine, such as the well-known UE (Unreal Engine), implements stereoscopic rendering by generating two cameras with a positional offset. This offset is typically the IPD (Interpupillary Distance) of the human eyes, so that two images at slightly different angles can be rendered from two slightly different viewpoints, and an immersive stereoscopic effect is then presented to the user through a special display device, such as a VR headset.
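The IPD-based camera offset described above can be sketched minimally as follows. This is an illustrative example only: the function name and the 0.064 m default IPD (a commonly cited average) are assumptions, not taken from the patent or from any engine API.

```python
# Hypothetical sketch: deriving the left/right eye camera positions from
# a head position and an interpupillary distance (IPD). All names and
# the 0.064 m default are illustrative assumptions.

def eye_positions(head_pos, right_axis, ipd=0.064):
    """Offset the head position by half the IPD along the head's
    right axis to obtain the two eye (camera) positions."""
    half = ipd / 2.0
    left = tuple(h - half * r for h, r in zip(head_pos, right_axis))
    right = tuple(h + half * r for h, r in zip(head_pos, right_axis))
    return left, right

# Head at eye height 1.6 m, right axis along +x:
left_eye, right_eye = eye_positions((0.0, 1.6, 0.0), (1.0, 0.0, 0.0))
# The two cameras end up separated by exactly the IPD along the x axis.
```

Rendering the scene once from each of these two positions is what produces the two slightly different images mentioned above.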
However, this method of stereoscopic rendering has a problem. During rendering, whether for occlusion culling (Occlusion Culling), geometry transformation (Geometry Transform), vertex shading (Vertex Shading) or pixel shading (Pixel Shading), the entire rendering pipeline needs to be executed twice in full, i.e. once for the left-eye view and once for the right-eye view. This means that the area common to both views is repeatedly calculated many times during the rendering process. The repeated calculation consumes a great deal of computing resources, reduces efficiency, and can cause the device to overheat, degrading the user experience. Therefore, how to reduce repeated calculation and improve rendering efficiency while ensuring the stereoscopic rendering effect has become a key point of optimization in current virtual reality technology.
Disclosure of Invention
The present invention has been made in view of the problem of how to reduce, as much as possible, the area that needs to be repeatedly calculated, thereby improving rendering efficiency.
In order to solve the technical problems, the invention provides the following technical scheme:
in one aspect, the invention provides a VR-based stereoscopic rendering optimization method, which is implemented by an electronic device and includes:
s1, respectively acquiring areas to be rendered based on a left camera and a right camera, and partitioning the areas to be rendered.
S2, calculating the geometry covered by the partitioned region to be rendered, and processing the geometry to obtain the processed region to be rendered.
And S3, calculating the coloring and illumination information of the processed region to be rendered through the left camera and the right camera to obtain a three-dimensional rendering result.
Optionally, the partitioned area to be rendered in S2 includes: the area photographed by the left camera alone, the intersection area common to the left and right cameras, and the area photographed by the right camera alone.
Optionally, processing the geometry in S2 includes:
the geometry is culled and geometrically calculated.
Optionally, the geometric calculation includes:
the geometry is converted into view angles of the left and right cameras.
Optionally, in S3, calculating, by the left camera and the right camera, the coloring and illumination information of the processed area to be rendered to obtain a stereoscopic rendering result includes:
and S31, calculating coloring and illumination information of the area shot by the left camera after processing.
S32, calculating coloring and illumination information of the area shot by the right camera after processing.
S33, respectively calculating, through the left camera and the right camera, the coloring and illumination information of the processed intersection area common to the left and right cameras.
And S34, obtaining a stereoscopic rendering result according to the coloring and illumination information of the area shot by the left camera alone, the coloring and illumination information of the area shot by the right camera alone, and the coloring and illumination information of the intersection area common to the left and right cameras.
Optionally, in S31, calculating the coloring and illumination information of the processed area shot by the left camera alone includes:
and calculating the coloring and illumination information of the area shot by the left camera after processing according to the position and the visual angle of the left camera, the nature of the geometry and the illumination information of the geometry.
On the other hand, the invention provides a VR-based stereoscopic rendering optimization device, which is used to implement the VR-based stereoscopic rendering optimization method and includes:
the acquisition module is used for respectively acquiring the area to be rendered based on the left camera and the right camera, and partitioning the area to be rendered.
And the calculation module is used for calculating the geometric body covered by the partitioned area to be rendered and processing the geometric body to obtain the processed area to be rendered.
And the output module is used for calculating the coloring and illumination information of the processed area to be rendered through the left camera and the right camera to obtain a three-dimensional rendering result.
Optionally, the partitioned area to be rendered includes: the area photographed by the left camera alone, the intersection area common to the left and right cameras, and the area photographed by the right camera alone.
Optionally, the computing module is further configured to:
the geometry is culled and geometrically calculated.
Optionally, the geometric calculation includes:
the geometry is converted into view angles of the left and right cameras.
Optionally, the output module is further configured to:
and S31, calculating coloring and illumination information of the area shot by the left camera after processing.
S32, calculating coloring and illumination information of the area shot by the right camera after processing.
S33, respectively calculating, through the left camera and the right camera, the coloring and illumination information of the processed intersection area common to the left and right cameras.
And S34, obtaining a stereoscopic rendering result according to the coloring and illumination information of the area shot by the left camera alone, the coloring and illumination information of the area shot by the right camera alone, and the coloring and illumination information of the intersection area common to the left and right cameras.
Optionally, the output module is further configured to:
and calculating the coloring and illumination information of the area shot by the left camera after processing according to the position and the visual angle of the left camera, the nature of the geometry and the illumination information of the geometry.
In one aspect, an electronic device is provided, the electronic device including a processor and a memory, the memory storing at least one instruction, the at least one instruction loaded and executed by the processor to implement the VR-based stereoscopic rendering optimization method described above.
In one aspect, a computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the VR-based stereoscopic rendering optimization method described above is provided.
Compared with the prior art, the technical scheme has at least the following beneficial effects:
according to the scheme, the three-dimensional rendering process is optimized, the whole rendering area is divided into three parts, namely, the area photographed by the left camera alone, the intersection area common to the left camera and the right camera alone and the area photographed by the right camera alone, and the three parts are processed respectively. The method fully considers the common part of the left and right camera views, avoids a large number of repeated calculations, and improves the rendering efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow diagram of a VR-based stereoscopic rendering optimization method according to an embodiment of the present invention;
fig. 2 is a block diagram of a VR-based stereoscopic rendering optimization apparatus provided by an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the embodiments of the present invention. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without creative efforts, based on the described embodiments of the present invention fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a VR-based stereoscopic rendering optimization method, which may be implemented by an electronic device. The VR-based stereoscopic rendering optimization method flowchart shown in fig. 1, the process flow of the method may include the following steps:
s1, respectively acquiring areas to be rendered based on a left camera and a right camera, and partitioning the areas to be rendered.
The partitioned area to be rendered may include: the area photographed by the left camera alone, the intersection area common to the left and right cameras, and the area photographed by the right camera alone.
The design concept of this partitioning is that the invention aims to reduce, as much as possible, the areas that need to be repeatedly calculated, thereby improving rendering efficiency.
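The three-way partition can be illustrated with a deliberately simplified sketch in which each camera's view is reduced to a 1-D horizontal interval; the function and data layout are assumptions for illustration, not the patent's actual frustum test.

```python
# Simplified sketch of the three-way partition: each object is tested
# against the left and right camera views, here reduced to 1-D
# (xmin, xmax) intervals. A real implementation would test against
# full 3-D view frusta; everything here is illustrative.

def partition(objects, left_view, right_view):
    """Split (name, x) objects into left-only, shared, and right-only sets."""
    left_only, shared, right_only = [], [], []
    for name, x in objects:
        in_left = left_view[0] <= x <= left_view[1]
        in_right = right_view[0] <= x <= right_view[1]
        if in_left and in_right:
            shared.append(name)
        elif in_left:
            left_only.append(name)
        elif in_right:
            right_only.append(name)
    return left_only, shared, right_only

objs = [("a", -5.0), ("b", 0.0), ("c", 5.0)]
l_only, shared, r_only = partition(objs, (-6.0, 2.0), (-2.0, 6.0))
# "a" falls only in the left interval, "b" in both, "c" only in the right.
```

Only the objects in the shared set are candidates for the single-pass geometry processing described below; the two exclusive sets are handled per camera as before.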
S2, calculating the geometry covered by the partitioned region to be rendered, and processing the geometry to obtain the processed region to be rendered.
In a possible embodiment, the invention requires the calculation of the geometry covered by these three areas. The purpose of this step is to find out those objects or scene elements that need to be rendered.
Optionally, processing the geometry in S2 includes:
the geometry is culled and geometrically calculated.
In one possible embodiment, the present invention performs culling and geometric calculations on these geometries. The purpose of culling is to remove portions that are not seen in the final rendering result, e.g., portions that are occluded by other objects, which may further reduce unnecessary computation.
The geometric calculation then transforms these geometries into the view space of each camera for subsequent coloring and illumination calculations.
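The view-space conversion can be sketched minimally as follows; only the translation part is shown (an axis-aligned camera), and all names are illustrative assumptions rather than the patent's implementation.

```python
# Minimal sketch of converting a world-space vertex into each eye's
# view space. Only translation is shown; a full implementation would
# also apply the camera's rotation and projection. Names are illustrative.

def to_view_space(vertex, camera_pos):
    """For an axis-aligned camera, the view-space position is simply
    the world-space vertex minus the camera position."""
    return tuple(v - c for v, c in zip(vertex, camera_pos))

vertex = (1.0, 0.0, 5.0)
left_cam = (-0.032, 0.0, 0.0)   # cameras offset by half an assumed IPD
right_cam = (0.032, 0.0, 0.0)
v_left = to_view_space(vertex, left_cam)
v_right = to_view_space(vertex, right_cam)
# The same vertex lands at slightly different x coordinates in the two
# view spaces; this horizontal disparity is what yields the stereo effect.
```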
And S3, calculating the coloring and illumination information of the processed region to be rendered through the left camera and the right camera to obtain a three-dimensional rendering result.
Optionally, the step S3 may include the following steps S31 to S34:
and S31, calculating coloring and illumination information of the area shot by the left camera after processing.
Alternatively, the step S31 may be:
and calculating the coloring and illumination information of the area shot by the left camera after processing according to the position and the visual angle of the left camera, the nature of the geometry and the illumination information of the geometry.
S32, calculating coloring and illumination information of the area shot by the right camera after processing.
S33, respectively calculating, through the left camera and the right camera, the coloring and illumination information of the processed intersection area common to the left and right cameras.
And S34, obtaining a stereoscopic rendering result according to the coloring and illumination information of the area shot by the left camera alone, the coloring and illumination information of the area shot by the right camera alone, and the coloring and illumination information of the intersection area common to the left and right cameras.
In a possible embodiment, the left and right cameras calculate the coloring and illumination information of their respective exclusive areas and of the common area.
In this step, each camera calculates the color of each pixel based on its own position and viewing angle, and on the properties and illumination of the geometry.
It should be noted that for the common area, although its geometry and culling information need only be calculated once, the coloring and illumination must still be calculated separately for each camera, since these calculations depend on the camera's position and viewing angle.
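The cost structure just described can be made concrete with a counting sketch: geometry and culling work runs once per object, while view-dependent shading still runs once per (object, eye) pair. The counters stand in for real GPU work, and all names are illustrative assumptions.

```python
# Illustrative counting sketch of the claimed saving: under the
# optimized scheme, geometry passes are shared for the common region,
# while shading remains per-eye. Counters replace real GPU work.

def pass_counts(left_only, shared, right_only):
    """Return (geometry_passes, shading_passes) for the optimized scheme."""
    total = len(left_only) + len(shared) + len(right_only)
    geometry_passes = total               # geometry/culling done once per object
    shading_passes = total + len(shared)  # shared objects are shaded twice
    return geometry_passes, shading_passes

geo, shading = pass_counts(["a"], ["b", "c"], ["d"])
naive_geo = 2 * 4  # naive stereo runs the full pipeline twice over all 4 objects
```

With one left-only, two shared, and one right-only object, the optimized scheme needs 4 geometry passes instead of the naive 8, while shading correctly remains per-eye for the shared objects.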
In the embodiment of the invention, the three-dimensional rendering flow is optimized, the whole rendering area is divided into three parts, namely, an area independently shot by a left camera, an intersection area common to the left camera and the right camera and an area independently shot by the right camera, and the three parts are respectively processed. The method fully considers the common part of the left and right camera views, avoids a large number of repeated calculations, and improves the rendering efficiency.
As shown in fig. 2, an embodiment of the present invention provides a VR-based stereoscopic rendering optimization apparatus 200, where the apparatus 200 is used to implement the VR-based stereoscopic rendering optimization method, and the apparatus 200 includes:
the obtaining module 210 is configured to obtain the region to be rendered based on the left camera and the right camera, and partition the region to be rendered.
The calculating module 220 is configured to calculate a geometry covered by the partitioned area to be rendered, and process the geometry to obtain a processed area to be rendered.
And the output module 230 is configured to calculate, through the left camera and the right camera, the color and illumination information of the processed region to be rendered, and obtain a stereoscopic rendering result.
Optionally, the partitioned area to be rendered includes: the area photographed by the left camera alone, the intersection area common to the left and right cameras, and the area photographed by the right camera alone.
Optionally, the computing module 220 is further configured to:
the geometry is culled and geometrically calculated.
Optionally, the geometric calculation includes:
the geometry is converted into view angles of the left and right cameras.
Optionally, the output module 230 is further configured to:
and S31, calculating coloring and illumination information of the area shot by the left camera after processing.
S32, calculating coloring and illumination information of the area shot by the right camera after processing.
S33, respectively calculating, through the left camera and the right camera, the coloring and illumination information of the processed intersection area common to the left and right cameras.
And S34, obtaining a stereoscopic rendering result according to the coloring and illumination information of the area shot by the left camera alone, the coloring and illumination information of the area shot by the right camera alone, and the coloring and illumination information of the intersection area common to the left and right cameras.
Optionally, the output module 230 is further configured to:
and calculating the coloring and illumination information of the area shot by the left camera after processing according to the position and the visual angle of the left camera, the nature of the geometry and the illumination information of the geometry.
In the embodiment of the invention, the three-dimensional rendering flow is optimized, the whole rendering area is divided into three parts, namely, an area independently shot by a left camera, an intersection area common to the left camera and the right camera and an area independently shot by the right camera, and the three parts are respectively processed. The method fully considers the common part of the left and right camera views, avoids a large number of repeated calculations, and improves the rendering efficiency.
Fig. 3 is a schematic structural diagram of an electronic device 300 according to an embodiment of the present invention. The electronic device 300 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPU) 301 and one or more memories 302, where at least one instruction is stored in the memory 302, and the at least one instruction is loaded and executed by the processor 301 to implement the following VR-based stereoscopic rendering optimization method:
s1, respectively acquiring areas to be rendered based on a left camera and a right camera, and partitioning the areas to be rendered.
S2, calculating the geometry covered by the partitioned region to be rendered, and processing the geometry to obtain the processed region to be rendered.
And S3, calculating the coloring and illumination information of the processed region to be rendered through the left camera and the right camera to obtain a three-dimensional rendering result.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as a memory comprising instructions executable by a processor in a terminal to perform the VR-based stereoscopic rendering optimization method described above. For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (10)

1. A VR-based stereoscopic rendering optimization method, the method comprising:
s1, respectively acquiring a region to be rendered based on a left camera and a right camera, and partitioning the region to be rendered;
s2, calculating a geometry covered by the partitioned area to be rendered, and processing the geometry to obtain a processed area to be rendered;
and S3, calculating the coloring and illumination information of the processed area to be rendered through the left camera and the right camera to obtain a three-dimensional rendering result.
2. The method according to claim 1, wherein the partitioned area to be rendered in S2 comprises: the area photographed by the left camera alone, the intersection area common to the left and right cameras, and the area photographed by the right camera alone.
3. The method of claim 1, wherein the processing of the geometry in S2 comprises:
and performing elimination and geometric calculation on the geometric body.
4. A method according to claim 3, wherein the geometric calculation comprises:
the geometry is converted into view angles of the left and right cameras.
5. The method according to claim 1, wherein in S3, calculating, by the left camera and the right camera, the coloring and illumination information of the processed area to be rendered to obtain a stereoscopic rendering result includes:
s31, calculating coloring and illumination information of the area shot by the left camera after processing;
s32, calculating coloring and illumination information of the area shot by the right camera after processing;
s33, respectively calculating, through the left camera and the right camera, the coloring and illumination information of the processed intersection area common to the left and right cameras;
and S34, obtaining a stereoscopic rendering result according to the coloring and illumination information of the area shot by the left camera alone, the coloring and illumination information of the area shot by the right camera alone, and the coloring and illumination information of the intersection area common to the left and right cameras.
6. The method according to claim 5, wherein in S31, calculating the coloring and illumination information of the processed area shot by the left camera alone includes:
and calculating the coloring and illumination information of the area shot by the left camera after processing according to the position and the visual angle of the left camera, the nature of the geometry and the illumination information of the geometry.
7. A VR-based stereoscopic rendering optimization apparatus, the apparatus comprising:
the acquisition module is used for respectively acquiring the region to be rendered based on the left camera and the right camera and partitioning the region to be rendered;
the computing module is used for computing the geometry covered by the partitioned area to be rendered, and processing the geometry to obtain a processed area to be rendered;
and the output module is used for calculating the coloring and illumination information of the processed area to be rendered through the left camera and the right camera to obtain a three-dimensional rendering result.
8. The apparatus of claim 7, wherein the partitioned region to be rendered comprises: the area photographed by the left camera alone, the intersection area common to the left and right cameras, and the area photographed by the right camera alone.
9. The apparatus of claim 7, wherein the output module is configured to:
s31, calculating coloring and illumination information of the area shot by the left camera after processing;
s32, calculating coloring and illumination information of the area shot by the right camera after processing;
s33, respectively calculating, through the left camera and the right camera, the coloring and illumination information of the processed intersection area common to the left and right cameras;
and S34, obtaining a stereoscopic rendering result according to the coloring and illumination information of the area shot by the left camera alone, the coloring and illumination information of the area shot by the right camera alone, and the coloring and illumination information of the intersection area common to the left and right cameras.
10. The apparatus of claim 9, wherein the output module is configured to:
and calculating the coloring and illumination information of the area shot by the left camera after processing according to the position and the visual angle of the left camera, the nature of the geometry and the illumination information of the geometry.
CN202311460150.6A 2023-11-03 2023-11-03 VR-based stereoscopic rendering optimization method and device Pending CN117475069A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311460150.6A CN117475069A (en) 2023-11-03 2023-11-03 VR-based stereoscopic rendering optimization method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311460150.6A CN117475069A (en) 2023-11-03 2023-11-03 VR-based stereoscopic rendering optimization method and device

Publications (1)

Publication Number Publication Date
CN117475069A 2024-01-30

Family

ID=89625235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311460150.6A Pending CN117475069A (en) 2023-11-03 2023-11-03 VR-based stereoscopic rendering optimization method and device

Country Status (1)

Country Link
CN (1) CN117475069A (en)

Similar Documents

Publication Publication Date Title
KR101851180B1 (en) Morphological anti-aliasing (mlaa) of a re-projection of a two-dimensional image
CN107798704B (en) Real-time image superposition method and device for augmented reality
CN110246146A (en) Full parallax light field content generating method and device based on multiple deep image rendering
EP2410492A2 (en) Optimal point density using camera proximity for point-based global illumination
WO2011120228A1 (en) A multi-core processor supporting real-time 3d image rendering on an autostereoscopic display
IL256459A (en) Fast rendering of quadrics and marking of silhouettes thereof
US11417060B2 (en) Stereoscopic rendering of virtual 3D objects
CN111275801A (en) Three-dimensional picture rendering method and device
US9165393B1 (en) Measuring stereoscopic quality in a three-dimensional computer-generated scene
CN110782507A (en) Texture mapping generation method and system based on face mesh model and electronic equipment
US8619094B2 (en) Morphological anti-aliasing (MLAA) of a re-projection of a two-dimensional image
CN114863014A (en) Fusion display method and device for three-dimensional model
US20180213215A1 (en) Method and device for displaying a three-dimensional scene on display surface having an arbitrary non-planar shape
CN114463408A (en) Free viewpoint image generation method, device, equipment and storage medium
CN107798703B (en) Real-time image superposition method and device for augmented reality
US11288774B2 (en) Image processing method and apparatus, storage medium, and electronic apparatus
CN110838167B (en) Model rendering method, device and storage medium
CN109816765B (en) Method, device, equipment and medium for determining textures of dynamic scene in real time
CN113223137B (en) Generation method and device of perspective projection human face point cloud image and electronic equipment
CN117475069A (en) VR-based stereoscopic rendering optimization method and device
CN115830202A (en) Three-dimensional model rendering method and device
CN113592990A (en) Three-dimensional effect generation method, device, equipment and medium for two-dimensional image
EP4258221A2 (en) Image processing apparatus, image processing method, and program
CN115984458B (en) Method, system and controller for extracting target object model based on radiation field
CN117459694A (en) Image generation method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination