CN113470153A - Rendering method and device of virtual scene and electronic equipment - Google Patents

Rendering method and device of virtual scene and electronic equipment

Info

Publication number
CN113470153A
Authority
CN
China
Prior art keywords
rendering
image
virtual scene
color
screen
Prior art date
Legal status
Granted
Application number
CN202110836252.8A
Other languages
Chinese (zh)
Other versions
CN113470153B (en)
Inventor
熊亚
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110836252.8A
Publication of CN113470153A
Application granted
Publication of CN113470153B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/005 - General purpose rendering architectures
    • G06T1/00 - General purpose image data processing
    • G06T1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G06T1/60 - Memory management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

The application provides a rendering method and apparatus for a virtual scene, an electronic device, and a computer-readable storage medium. The method comprises the following steps: creating a color buffer whose rendering size is smaller than that of the screen buffer; rendering transparent objects in a virtual scene to the color buffer to obtain a first image; rendering the first image in the color buffer to the screen buffer, and performing up-sampling processing during this rendering to obtain a second image; and rendering the second image in the screen buffer to the screen so that the second image is displayed on the screen. With the method and apparatus, rendering time can be reduced and the computing pressure during rendering can be lowered.

Description

Rendering method and device of virtual scene and electronic equipment
Technical Field
The present application relates to computer technologies, and in particular, to a method and an apparatus for rendering a virtual scene, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of computer technology, virtual modeling technology is widely applied in fields such as game production, animation production, and Virtual Reality (VR). Through virtual modeling, a virtual scene different from the real world can be displayed on a screen, achieving scene display with a stereoscopic and realistic feel.
In the related art, all objects in a virtual scene are rendered indiscriminately into a screen buffer and then rendered from the screen buffer to the screen for display. However, the rendering size of the screen buffer is usually the same as the screen size, which results in too many pixels to be calculated and an overly long rendering time; meanwhile, when the virtual scene contains transparent objects, a transparent object cannot occlude or cull the objects behind it, so a single pixel may be drawn many times during rendering and the computing pressure becomes excessive.
Disclosure of Invention
The embodiment of the application provides a rendering method and device of a virtual scene, electronic equipment and a computer readable storage medium, which can reduce rendering time consumption and reduce computing pressure during rendering.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a rendering method of a virtual scene, which comprises the following steps:
creating a color buffer area with a rendering size smaller than that of the screen buffer area;
rendering a transparent object in the virtual scene to the color buffer area to obtain a first image;
rendering the first image in the color buffer area to the screen buffer area, and performing up-sampling processing in the rendering process to obtain a second image;
rendering the second image in the screen buffer to a screen to display the second image in the screen.
An embodiment of the present application provides a rendering apparatus for a virtual scene, including:
the creating module is used for creating a color buffer area with a rendering size smaller than that of the screen buffer area;
the first rendering module is used for rendering the transparent object in the virtual scene to the color buffer area to obtain a first image;
the second rendering module is used for rendering the first image in the color buffer area to the screen buffer area and performing up-sampling processing in the rendering process to obtain a second image;
and the screen rendering module is used for rendering the second image in the screen buffer area to a screen so as to display the second image in the screen.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the rendering method of the virtual scene provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiments of the present application provide a computer-readable storage medium storing executable instructions that, when executed by a processor, cause the processor to implement the rendering method of a virtual scene provided in the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
and rendering the transparent objects in the virtual scene to a color buffer area, and rendering the transparent objects to a screen buffer area by combining an up-sampling mode, and finally realizing the rendering from the screen buffer area to the screen. Because the rendering size of the color buffer area is smaller than that of the screen buffer area, the number of pixels needing to be calculated (drawn) can be reduced, namely the rendering pressure and the rendering time consumption can be reduced; meanwhile, the transparent object has small influence on the picture effect, so that the display effect of the virtual scene can be ensured to a certain extent.
Drawings
Fig. 1 is an architecture diagram of a rendering system of a virtual scene provided in an embodiment of the present application;
fig. 2 is a schematic architecture diagram of a terminal device provided in an embodiment of the present application;
fig. 3 is a schematic architecture diagram of a virtual scene engine provided in an embodiment of the present application;
fig. 4A to fig. 4E are schematic flow diagrams of a rendering method of a virtual scene according to an embodiment of the present application;
FIG. 5 is a schematic illustration of color mixing provided by embodiments of the present application;
FIG. 6 is a flowchart illustrating dynamically turning small-screen rendering on and off according to an embodiment of the present application;
fig. 7 is a rendering schematic diagram of a virtual scene provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of image comparison provided by embodiments of the present application;
FIG. 9A is a schematic diagram of an image rendered according to a scheme provided in the related art;
FIG. 9B is a schematic diagram of an image rendered according to a scheme provided by an embodiment of the present application;
FIG. 10A is a rendering load diagram in a scenario provided by the related art;
fig. 10B is a rendering load diagram in the scheme provided by the embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application will be described in further detail with reference to the attached drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first/second/third" are only used to distinguish similar objects and do not denote a particular order. It should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein. In the following description, the term "plurality" means at least two.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Virtual scene: a scene output by an electronic device that differs from the real world. Visual perception of a virtual scene can be formed with the naked eye or with the assistance of a device, for example through two-dimensional images output on a display screen, or through three-dimensional images output by stereoscopic display technologies such as stereoscopic projection, virtual reality, and augmented reality; in addition, various perceptions simulating the real world, such as auditory, tactile, olfactory, and motion perception, can be formed through various possible hardware. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. The embodiments of the application do not limit the dimension of the virtual scene; for example, the virtual scene may be a three-dimensional virtual scene.
2) In response to: indicates the condition or state on which a performed operation depends; when the condition or state is satisfied, the operation or operations performed may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are executed.
3) Buffer: a storage space located in memory or video memory and used for storing specific data. The embodiments of the application involve a screen buffer, a color buffer, and a depth buffer, where the color buffer and the depth buffer may belong to a render target (RenderTarget). The screen buffer is a buffer in video memory used for displaying the screen image (such as a back buffer), and its rendering size (or resolution) is consistent with the screen size; a RenderTarget is a buffer located in memory or video memory whose rendering size may differ from that of the screen buffer. It should be noted that, for an image (or image data) stored in a buffer, the size of the image is consistent with the rendering size of the buffer.
4) Transparent object: an object in the virtual scene that cannot occlude or cull the objects behind it, for example an object made of a transparent material.
5) Upsampling (Upsample): a way of enlarging an image proportionally to obtain a new image; to maintain image quality during enlargement, interpolation can be used to fill in the missing pixels. Downsampling is similar to upsampling, except that downsampling refers to scaling an image down proportionally.
6) Pixel: an element of an image that cannot be further divided.
7) Virtual object: the image of various people and objects that can interact in the virtual scene, or the movable objects in the virtual scene. The movable object may be a virtual character, a virtual animal, an animation character, etc., such as a character, an animal, a plant, an oil drum, a wall, a stone, etc., displayed in a virtual scene. The virtual object may be an avatar in the virtual scene that is virtual to represent the user. The virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene and occupying a portion of the space in the virtual scene.
The embodiment of the application provides a rendering method and device for a virtual scene, electronic equipment and a computer-readable storage medium, which can reduce rendering pressure and rendering time on the basis of ensuring a picture effect. An exemplary application of the electronic device provided in the embodiment of the present application is described below, and the electronic device provided in the embodiment of the present application may be implemented as various types of terminal devices, and may also be implemented as a server.
Referring to fig. 1, fig. 1 is an architectural diagram of a rendering system 100 for a virtual scene provided in an embodiment of the present application, and a terminal device 400 is connected to a server 200 through a network 300, where the network 300 may be a wide area network or a local area network, or a combination of the two.
In some embodiments, taking the electronic device being a terminal device as an example, the rendering method of a virtual scene provided in the embodiments of the present application may be implemented by the terminal device. For example, the terminal device 400 may calculate the data required for displaying the virtual scene through graphics computing hardware such as a Graphics Processing Unit (GPU), perform loading, parsing, and rendering of the display data, and output an image capable of forming visual perception of the virtual scene by means of graphics output hardware such as a screen, for example by displaying the image of the virtual scene on the display screen of a smartphone.
For example, the terminal device 400 may create a color buffer having a rendering size smaller than the screen buffer; rendering a transparent object in a virtual scene to a color buffer area to obtain a first image; rendering the first image in the color buffer area to a screen buffer area, and performing up-sampling processing in the rendering process to obtain a second image; rendering the second image in the screen buffer to the screen to display the second image in the screen.
In some embodiments, taking the electronic device as a server as an example, the rendering method of the virtual scene provided in the embodiments of the present application may also be cooperatively implemented by the server and the terminal device. For example, the server 200 performs calculation of virtual scene-related display data and transmits the same to the terminal device 400, and the terminal device 400 relies on graphics computing hardware to complete loading, parsing and rendering of the display data and relies on graphics output hardware to output images to form visual perception.
In some embodiments, the terminal device 400 or the server 200 may implement the rendering method of the virtual scene provided in the embodiment of the present application by running a computer program, for example, the client 410 shown in fig. 1. For example, the computer program may be a native program or a software module in an operating system; can be a local (Native) Application program (APP), i.e. a program that needs to be installed in an operating system to run, such as a military simulation program, a game Application program, etc.; or may be an applet, i.e. a program that can be run only by downloading it to the browser environment; may be an applet that can be embedded into any APP; it may also be a plug-in embedded in the virtual scene engine, where the plug-in may be run or shut down by user control. In general, the computer programs described above may be any form of application, module or plug-in. The game application may be any one of a First-Person shooter (FPS) game, a Third-Person shooter (TPS) game, a Multiplayer Online Battle Arena (MOBA) game, and a Multiplayer gunfight live game, which is not limited to the above.
In some embodiments, the server 200 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), and a big data and artificial intelligence platform, where the cloud service may be a data service for a virtual scene (related data for providing the virtual scene), and is used by the terminal device 400 to call. The terminal device 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart television, a smart watch, and the like. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited.
The following description takes the electronic device provided in the embodiments of the present application being a terminal device as an example; it can be understood that, for the case where the electronic device is a server, some parts of the structure shown in fig. 2 (such as the user interface, the presentation module, and the input processing module) may be omitted. Referring to fig. 2, fig. 2 is a schematic structural diagram of a terminal device 400 provided in an embodiment of the present application; the terminal device 400 shown in fig. 2 includes: at least one processor 410, memory 450, at least one network interface 420, and a user interface 430. The various components in the terminal device 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable communications among the components. In addition to a data bus, the bus system 440 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as the bus system 440 in fig. 2.
The processor 410 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable the presentation of media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 450 optionally includes one or more storage devices physically located remote from processor 410.
The memory 450 includes either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 450 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 451, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for communicating to other electronic devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 453 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the rendering apparatus for a virtual scene provided in the embodiments of the present application may be implemented in software, and fig. 2 illustrates the rendering apparatus 455 for a virtual scene stored in the memory 450, which may be software in the form of programs and plug-ins, and includes the following software modules: a creation module 4551, a first rendering module 4552, a second rendering module 4553 and a screen rendering module 4554, which are logical and thus may be arbitrarily combined or further divided according to the functions implemented. The functions of the respective modules will be explained below.
Referring to fig. 3, fig. 3 is a schematic architecture diagram of a virtual scene engine provided in an embodiment of the present application. Where the virtual scene is a game virtual scene, the virtual scene engine may be a game engine, such as Unreal Engine. As shown in fig. 3, the virtual scene engine includes, but is not limited to, a rendering component (e.g., a renderer), an editing component (e.g., an editor for editing/producing the virtual scene), underlying algorithms, scene management (for managing a plurality of sub-scenes in the virtual scene), sound effects (for managing audio corresponding to the virtual scene), a script engine, and a camera component. The rendering method of the virtual scene provided in the embodiments of the present application may be implemented by the modules in the rendering apparatus 455 of the virtual scene shown in fig. 2 invoking the relevant components of the virtual scene engine shown in fig. 3, as illustrated below.
For example, the creation module 4551 is configured to call a rendering component in the virtual scene engine to create a color buffer with a rendering size smaller than the screen buffer; the first rendering module 4552 is configured to invoke a rendering component to render a transparent object in a virtual scene to a color buffer to obtain a first image; the second rendering module 4553 is configured to invoke a rendering component, so as to render the first image in the color buffer to the screen buffer, and perform upsampling processing in the rendering process to obtain a second image; the screen rendering module 4554 is configured to invoke the rendering component to render the second image in the screen buffer to the screen to display the second image in the screen.
Of course, the above examples do not limit the embodiments of the present application, and the calling relationship of each component included in the virtual scene engine and each module in the rendering device 455 of the virtual scene to the component in the virtual scene engine may be adjusted according to the actual application scene.
The rendering method of the virtual scene provided by the embodiment of the present application will be described in conjunction with exemplary applications and implementations of the electronic device provided by the embodiment of the present application.
Referring to fig. 4A, fig. 4A is a schematic flowchart of a rendering method of a virtual scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 4A.
In step 101, a color buffer whose rendering size is smaller than that of the screen buffer is created.
For example, a color buffer with a rendering size (or resolution) smaller than the screen buffer may be created for rendering of transparent objects in the virtual scene, i.e. for small screen rendering. The color buffer may be used to store information about the color, transparency, etc. of the pixel.
The rendering size ratio between the screen buffer and the color buffer is not limited and can be set according to the practical application scene; for example, rendering size of the screen buffer : rendering size of the color buffer = 4 : 1.
It should be noted that the color space used for representing colors in the embodiments of the present application is not limited; it may be, for example, an RGB color space, a CMYK color space, or a Lab color space.
In some embodiments, the creation of a color buffer whose rendering size is smaller than that of the screen buffer, as described above, may be accomplished by creating such a color buffer in video memory. For example, the color buffer may be created in the Tile memory space of the GPU, where the Tile memory space is a high-speed storage space in the GPU. This effectively avoids the low data-transmission efficiency and high power consumption caused by copying data back and forth between the GPU and main memory, and reduces the computing resource overhead of the rendering process.
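As an illustration of step 101 only, the sketch below shows how a reduced-size color buffer could be described; the RenderTargetDesc type, its fields, and the divisor of 2 (giving the 4 : 1 pixel-count ratio mentioned above) are assumptions for this example and are not tied to any particular graphics API.

```cpp
#include <cstdint>

// Hypothetical descriptor type; no specific graphics API is implied.
struct RenderTargetDesc {
    uint32_t width  = 0;
    uint32_t height = 0;
    bool     onChipTileMemory = false;  // hint: place the buffer in fast GPU Tile memory if available
};

// Describe a color buffer whose rendering size is a fraction of the screen buffer,
// e.g. divisor = 2 halves width and height, giving a 4 : 1 pixel-count ratio.
RenderTargetDesc MakeSmallColorBufferDesc(const RenderTargetDesc& screen, uint32_t divisor = 2) {
    RenderTargetDesc color;
    color.width  = screen.width  / divisor;
    color.height = screen.height / divisor;
    color.onChipTileMemory = true;      // keep data on the GPU to avoid copies to main memory
    return color;
}
```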
In step 102, transparent objects in the virtual scene are rendered to a color buffer to obtain a first image.
For example, transparent objects in the virtual scene are rendered to the color buffer in units of pixels, resulting in a first image. It should be noted that the first image may also be referred to as first image data, which includes a plurality of pixels and the colors corresponding to those pixels (and may also include the transparency corresponding to each pixel); the same applies to the intermediate image and the second image below.
In some embodiments, the above-described rendering of transparent objects in a virtual scene to a color buffer may be accomplished by: when the optimized rendering condition is met, rendering the transparent object in the virtual scene to a color buffer area; wherein the optimized rendering condition comprises at least one of: a sub-scene to be rendered of the virtual scene belongs to an optimized sub-scene; wherein the optimized sub-scenes comprise at least part of the sub-scenes in the virtual scene; the number of transparent objects with optimized rendering parameters in the virtual scene is greater than a number threshold; the scene parameter of the virtual scene is larger than the scene parameter threshold value; wherein the scene parameters include at least one of interaction parameters of the virtual objects, a number of the virtual objects, and device resource usage parameters.
In the embodiments of the application, an optimized rendering condition can be set for small-screen rendering. When the optimized rendering condition is met, it is determined that small-screen rendering is turned on, and the transparent objects in the virtual scene are rendered to the color buffer; when the optimized rendering condition is not met, it is determined that small-screen rendering is turned off, and all objects in the virtual scene are rendered into a non-optimized buffer (i.e., large-screen rendering is performed), where the non-optimized buffer is described in detail later.
The optimized rendering condition may include at least one of:
1) the sub-scene to be rendered currently in the virtual scene belongs to an optimized sub-scene, wherein the virtual scene comprises a plurality of sub-scenes, and the optimized sub-scene comprises at least part of the sub-scenes in the virtual scene. For example, a plurality of copies exist in a game virtual scene, and each copy corresponds to one sub-scene; for another example, there are multiple regions in the game virtual scene, each corresponding to a sub-scene.
2) The number of the transparent objects with optimized rendering parameters in the virtual scene is greater than a number threshold, wherein the rendering parameters of the transparent objects can include an optimized rendering parameter and a non-optimized rendering parameter, the optimized rendering parameter is used for indicating that the transparent objects need to be subjected to small-screen rendering, and the non-optimized rendering parameter is used for indicating that the transparent objects do not need to be subjected to small-screen rendering.
3) The current scene parameter of the virtual scene is greater than the scene parameter threshold, the scene parameter comprises at least one of the interaction parameter of the virtual objects, the number of the virtual objects and the equipment resource use parameter, the larger the scene parameter is, the larger the rendering pressure is, and the scene parameter threshold can be correspondingly set according to the type of the scene parameter. The virtual objects can refer to all the virtual objects, and can refer to a specific certain or several virtual objects, such as a specific virtual character; the interaction parameter may include at least one of a number of interactions and an execution frequency, and the interactions may include at least one of an attack operation and a cooperative operation; the device resource usage parameter may include at least one of a Central Processing Unit (CPU) usage rate and a GPU usage rate of the electronic device.
In this way, when the optimized rendering condition is met, the rendering pressure is relatively high, so small-screen rendering is turned on, reducing computing resource overhead and improving rendering efficiency; when the optimized rendering condition is not met, the rendering pressure is within a bearable range, so small-screen rendering is turned off, improving the accuracy of the finally displayed image.
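As a rough sketch of the optimized rendering condition check described above (the field names and threshold values are assumptions; the actual conditions and thresholds are whatever the application configures):

```cpp
#include <cstddef>

// Assumed, simplified inputs for the three example conditions in the text.
struct SceneState {
    bool        subSceneIsOptimized   = false;  // current sub-scene is marked as an optimized sub-scene
    std::size_t optimizedTransparents = 0;      // transparent objects carrying the optimized rendering parameter
    float       sceneParameter        = 0.0f;   // e.g. interaction count, virtual-object count, CPU/GPU usage
};

struct Thresholds {
    std::size_t numberThreshold         = 64;   // example value only
    float       sceneParameterThreshold = 0.8f; // example value only
};

// Small-screen rendering is turned on when at least one optimized rendering condition holds.
bool ShouldUseSmallScreenRendering(const SceneState& s, const Thresholds& t) {
    return s.subSceneIsOptimized
        || s.optimizedTransparents > t.numberThreshold
        || s.sceneParameter > t.sceneParameterThreshold;
}
```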
In some embodiments, the virtual scene includes a plurality of sub-scenes, and between any of the steps, further includes: performing at least one of: in response to an optimized rendering configuration operation for at least part of the sub-scenes in the virtual scene, treating at least part of the sub-scenes as optimized sub-scenes; and screening the multiple sub-scenes according to the historical scene parameters respectively corresponding to the multiple sub-scenes to obtain an optimized sub-scene.
The embodiments of the application provide the following two ways to determine the optimized sub-scenes:
1) In response to the optimized rendering configuration operation for at least some of the sub-scenes in the virtual scene, the at least some sub-scenes are treated as optimized sub-scenes. For example, personnel associated with the virtual scene (e.g., production personnel, planning personnel, etc.) may configure each sub-scene in the virtual scene individually, i.e., configure whether it is an optimized sub-scene. In this way, the degree of freedom in determining optimized sub-scenes can be improved, and related personnel can be supported in configuring according to actual rendering requirements.
2) The multiple sub-scenes are screened according to the historical scene parameters respectively corresponding to them, to obtain the optimized sub-scenes. For example, for each sub-scene, the scene parameters within a historical time period are acquired as its historical scene parameters. Then, the sub-scenes whose historical scene parameters are greater than the scene parameter threshold may be used as optimized sub-scenes; or the multiple sub-scenes may be sorted in descending order of historical scene parameters, and the TOP K sub-scenes after sorting used as the optimized sub-scenes, where K is an integer greater than 0.
The two ways can be applied individually or in combination, which improves flexibility.
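The second way (screening by historical scene parameters, TOP K) could be sketched as follows; the SubSceneStats layout is an assumption made for illustration.

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

struct SubSceneStats {
    std::string name;                       // identifier of the sub-scene
    float historicalSceneParameter = 0.0f;  // e.g. averaged over a past time window
};

// Sort sub-scenes by descending historical scene parameter and keep the TOP K as optimized sub-scenes.
std::vector<std::string> SelectOptimizedSubScenes(std::vector<SubSceneStats> stats, std::size_t k) {
    std::sort(stats.begin(), stats.end(),
              [](const SubSceneStats& a, const SubSceneStats& b) {
                  return a.historicalSceneParameter > b.historicalSceneParameter;
              });
    std::vector<std::string> optimized;
    for (std::size_t i = 0; i < stats.size() && i < k; ++i) {
        optimized.push_back(stats[i].name);
    }
    return optimized;
}
```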
In some embodiments, between any of the steps, further comprising: performing at least one of: updating rendering parameters of at least partially transparent objects in the virtual scene to optimized rendering parameters in response to an optimized rendering configuration operation for the at least partially transparent objects; determining the types of a plurality of transparent objects in the virtual scene, and updating the rendering parameters of the transparent objects meeting the type conditions into optimized rendering parameters.
In this embodiment of the present application, the rendering parameters of the transparent objects in the virtual scene may default to non-optimized rendering parameters, and the updating of the rendering parameters is implemented by at least one of the following two ways:
1) in response to an optimized rendering configuration operation for at least partially transparent objects in the virtual scene, the rendering parameters of the at least partially transparent objects are updated to optimized rendering parameters. In this way, the manual configuration of the transparent objects by the relevant personnel of the virtual scene may be supported, for example, for some unimportant transparent objects, the corresponding rendering parameters may be configured to be optimized rendering parameters. In addition, the rendering parameters of the at least partially transparent object in the virtual scene may also be updated to non-optimized rendering parameters in response to a non-optimized rendering configuration operation for the at least partially transparent object.
2) The types of a plurality of transparent objects in the virtual scene are determined, and the rendering parameters of the transparent objects that meet the type condition are updated to the optimized rendering parameters. For example, the type condition may cover types that have little influence on the picture effect, such as a water body (which transitions naturally) or transparent particles (whose duration is short); for transparent objects belonging to these types, the corresponding rendering parameters are automatically updated to the optimized rendering parameters, as sketched below.
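A minimal sketch of the type-based update just mentioned; the enum values and the choice of which types count as having little influence on the picture effect are assumptions drawn from the examples in the text.

```cpp
#include <vector>

enum class TransparentType { Water, TransparentParticle, Other };
enum class RenderingParam  { Optimized, NonOptimized };

struct TransparentObject {
    TransparentType type  = TransparentType::Other;
    RenderingParam  param = RenderingParam::NonOptimized;  // default: not rendered on the small screen
};

// Transparent objects whose type has little influence on the picture effect
// (e.g. water, short-lived transparent particles) get the optimized rendering parameter.
void UpdateRenderingParamsByType(std::vector<TransparentObject>& objects) {
    for (TransparentObject& obj : objects) {
        if (obj.type == TransparentType::Water || obj.type == TransparentType::TransparentParticle) {
            obj.param = RenderingParam::Optimized;
        }
    }
}
```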
In step 103, the first image in the color buffer is rendered to the screen buffer, and upsampling is performed during the rendering process to obtain a second image.
For example, the first image in the color buffer is rendered to the screen buffer; since the screen buffer and the color buffer correspond to different rendering sizes, up-sampling processing can be performed according to the rendering size ratio between the screen buffer and the color buffer during the rendering to obtain a second image. The size of the second image is consistent with the rendering size of the screen buffer.
In step 104, the second image in the screen buffer is rendered to the screen to display the second image in the screen.
For example, the second image in the screen buffer is rendered to a screen (or called user interface) to display the second image in the screen, that is, to realize the display of the transparent object in the virtual scene.
As shown in fig. 4A, in the embodiment of the present application, rendering of a transparent object is implemented in a small-screen rendering manner, and the number of pixels to be calculated can be reduced, so that the calculation pressure during rendering is reduced, and the rendering efficiency is improved; meanwhile, compared with a non-transparent object, the transparent object has smaller influence on the picture effect, so that the quality of the finally displayed image can be ensured to a certain extent.
In some embodiments, referring to fig. 4B, fig. 4B is a flowchart of a rendering method of a virtual scene provided in an embodiment of the present application, and step 102 shown in fig. 4A may be implemented through step 201 to step 203, which will be described with reference to each step.
In step 201, traversing a plurality of transparent objects corresponding to a first pixel in a first image; wherein the first pixel represents any one pixel in the first image.
For example, a blank first image is initialized in the color buffer, and a process of rendering the transparent object to the color buffer will be described by taking any one pixel (hereinafter, named as a first pixel for convenience of description) in the first image as an example. Firstly, a plurality of transparent objects needing to be rendered to a first pixel in a virtual scene are determined to serve as a plurality of transparent objects corresponding to the first pixel, and then traversing processing is carried out on the plurality of transparent objects corresponding to the first pixel.
It should be noted that "a transparent object needs to be rendered to the first pixel" means that at least part of the transparent object needs to be rendered at the first pixel. In the rendered first image, a transparent object may be represented by at least one pixel.
In some embodiments, before step 201, further comprising: sequencing a plurality of transparent objects in the virtual scene according to the transparent object parameters to obtain a rendering sequence; wherein the transparent object parameter comprises at least one of depth, volume, and complexity; the traversing of the plurality of transparent objects corresponding to the first pixel in the first image can be implemented in such a manner that: and traversing a plurality of transparent objects corresponding to the first pixel according to the rendering sequence.
For example, transparent object parameters corresponding to a plurality of transparent objects in the virtual scene may be determined, and the transparent object parameters may include at least one of depth, volume, and complexity. Then, the transparent objects in the virtual scene are sorted according to the transparent object parameters to obtain a rendering order of the transparent objects, where the sorting may be performed according to an order from a large transparent object parameter to a small transparent object parameter, but this does not constitute a limitation to the embodiment of the present application.
After the rendering sequence of the transparent objects is obtained, the plurality of transparent objects corresponding to the first pixels can be traversed according to the rendering sequence, so that ordered rendering can be realized, and effectiveness and accuracy of the rendering process are improved.
In step 202, a color mixing process is performed according to the traversed color of the transparent object, the traversed transparency of the transparent object, the color of the first pixel, and the traversed complementary transparency of the transparent object, so as to obtain a new color of the first pixel.
Here, in the color mixing process, the transparency of the traversed transparent object may be used as the weight of the color of the traversed transparent object, the complementary transparency of the traversed transparent object may be used as the weight of the color of the first pixel, and the color of the traversed transparent object and the color of the first pixel are subjected to weighted summation process to obtain a new color of the first pixel.
Wherein, the sum of the transparency of the traversed transparent object and the complementary transparency of the traversed transparent object is 1, where 1 refers to the maximum value of the transparency.
In step 203, transparency blending processing is performed according to the transparency of the traversed transparent object and the transparency of the first pixel, so as to obtain a new transparency of the first pixel.
For example, the complementary transparency of the traversed transparent object may be multiplied by the transparency of the first pixel to obtain a new transparency of the first pixel.
When the new color and new transparency of the first pixel have been determined, traversal continues with the next transparent object, until all transparent objects corresponding to the first pixel have been traversed; at that point, the rendering of the first pixel is complete. For the other pixels in the first image, transparent objects can be rendered similarly with reference to steps 201 to 203 above.
It is worth noting that, in the embodiments of the present application, color and transparency can be represented in an RGBA color space, where RGB represents color and A represents transparency. In this case, the process of rendering the transparent objects in the virtual scene to the color buffer to obtain the first image is essentially a process of updating the values of the four channels R, G, B, and A for each pixel in the first image. When the first image is initialized, each of its pixels may be initialized to (0, 0, 0, 1), i.e., the values of R, G, and B are all 0 and the value of the A channel is 1.
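Steps 201 to 203 amount to a per-pixel blending loop in the small color buffer; a minimal sketch, under the assumptions that RGBA values lie in [0, 1] and that the samples are already in rendering order:

```cpp
#include <vector>

struct RGBA { float r = 0.f, g = 0.f, b = 0.f, a = 1.f; };  // first image initialized to (0, 0, 0, 1)

struct TransparentSample { float r, g, b, alpha; };          // color and transparency of one transparent
                                                             // object at this pixel

// Blend all transparent objects covering one pixel of the first image, in rendering order.
RGBA BlendTransparentsAtPixel(const std::vector<TransparentSample>& samples) {
    RGBA px;                                                 // starts as (0, 0, 0, 1)
    for (const TransparentSample& s : samples) {
        const float comp = 1.0f - s.alpha;                   // complementary transparency
        px.r = s.r * s.alpha + px.r * comp;                  // step 202: color mixing
        px.g = s.g * s.alpha + px.g * comp;
        px.b = s.b * s.alpha + px.b * comp;
        px.a = px.a * comp;                                  // step 203: transparency mixing
    }
    return px;
}
```

After the loop, px.a holds the accumulated complementary transparency, i.e., how much of the background should still show through, which is what step 301 below uses as the weight of the background color.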
As shown in fig. 4B, in the embodiment of the present application, a plurality of transparent objects are sequentially rendered based on a color mixing processing mechanism and a transparency mixing processing mechanism, so that the effectiveness and accuracy of a rendering process can be improved.
In some embodiments, referring to fig. 4C, fig. 4C is a flowchart of a rendering method of a virtual scene provided in an embodiment of the present application, and step 103 shown in fig. 4A may be implemented by steps 301 to 302, which will be described with reference to the steps.
In step 301, color mixing processing is performed according to the color of the first pixel in the first image, the transparency of the first pixel, and the color of the intermediate pixel in the intermediate image, so as to obtain a new color of the intermediate pixel; the first pixel represents any pixel in the first image, and the first pixel corresponds to the same pixel position as the middle pixel; the intermediate image is stored in a screen buffer and is used for performing upsampling processing to obtain a second image.
Here, for convenience of explanation, the image stored in the screen buffer and to be subjected to the up-sampling process is named as an intermediate image, wherein the intermediate image corresponds to the same size as the first image.
In the process of rendering the first image in the color buffer to the screen buffer, taking the first pixel in the first image as an example, color mixing processing may be performed according to the color of the first pixel (referring to the finally calculated color), the transparency of the first pixel (referring to the finally calculated transparency), and the color of an intermediate pixel in the intermediate image (as a background color), so as to obtain a new color of the intermediate pixel, where the pixel position of the first pixel in the first image is the same as the pixel position of the intermediate pixel in the intermediate image.
For example, the transparency of the first pixel may be used as the weight of the color of the intermediate pixel, the weight of the color of the first pixel is set to 1 by default, and the color of the first pixel and the color of the intermediate pixel are subjected to weighted summation processing to obtain a new color of the intermediate pixel.
Rendering may also be performed with reference to step 301 for other pixels in the intermediate image.
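A sketch of the per-pixel compositing of step 301, under the same assumptions as above: the first pixel's color is already weighted by the blending of steps 201 to 203, so its own weight is 1, and its transparency weights the background.

```cpp
struct RGB  { float r, g, b; };
struct RGBA { float r, g, b, a; };

// Step 301: composite one pixel of the first (small-screen) image onto the corresponding
// intermediate pixel already present in the screen buffer.
RGB CompositeOntoIntermediate(const RGBA& firstPixel, const RGB& intermediatePixel) {
    RGB out;
    out.r = firstPixel.r + intermediatePixel.r * firstPixel.a;
    out.g = firstPixel.g + intermediatePixel.g * firstPixel.a;
    out.b = firstPixel.b + intermediatePixel.b * firstPixel.a;
    return out;
}
```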
In step 302, according to the rendering size ratio between the screen buffer area and the color buffer area, performing pixel interpolation processing on an intermediate image obtained based on the rendering of the first image to obtain a second image; and the size of the second image is the same as the rendering size of the screen buffer area.
When rendering of each pixel in the intermediate image is complete (i.e., a new color has been obtained), pixel interpolation processing is performed on the intermediate image according to the rendering size ratio between the screen buffer and the color buffer to obtain the second image. The size of the second image is the same as the rendering size of the screen buffer; that is, the size ratio between the second image and the first image is the same as the rendering size ratio.
The algorithm used for the pixel interpolation processing in the embodiment of the present application is not limited, and may be, for example, a nearest neighbor interpolation algorithm, a linear interpolation algorithm, a bilinear interpolation algorithm, or a bicubic interpolation algorithm.
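For step 302, the sketch below uses the simplest of the interpolation algorithms listed above, nearest-neighbor interpolation, on an assumed row-major float RGB image layout; bilinear or bicubic interpolation would replace the inner lookup.

```cpp
#include <cstddef>
#include <vector>

struct Image {
    std::size_t width = 0, height = 0;
    std::vector<float> rgb;                                   // row-major, 3 floats per pixel
};

// Step 302: upsample the intermediate image to the screen-buffer size (nearest-neighbor).
Image UpsampleNearest(const Image& src, std::size_t dstWidth, std::size_t dstHeight) {
    Image dst;
    dst.width  = dstWidth;
    dst.height = dstHeight;
    dst.rgb.resize(dstWidth * dstHeight * 3);
    for (std::size_t y = 0; y < dstHeight; ++y) {
        const std::size_t sy = y * src.height / dstHeight;    // source row for this destination row
        for (std::size_t x = 0; x < dstWidth; ++x) {
            const std::size_t sx = x * src.width / dstWidth;  // source column for this destination column
            for (std::size_t c = 0; c < 3; ++c) {
                dst.rgb[(y * dstWidth + x) * 3 + c] = src.rgb[(sy * src.width + sx) * 3 + c];
            }
        }
    }
    return dst;
}
```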
As shown in fig. 4C, in the embodiment of the present application, by combining the mechanisms of color mixing processing and upsampling processing, accurate rendering from a small screen to a large screen can be achieved while saving the amount of computation.
In some embodiments, referring to fig. 4D, fig. 4D is a flowchart of a rendering method of a virtual scene provided in an embodiment of the present application, and based on fig. 4A, while step 102 is executed or before step 102 is executed, in step 401, traversal processing may be performed on a plurality of transparent objects in the virtual scene.
In the embodiment of the present application, the transparent object to be rendered for each pixel in the first image may be determined through a mechanism of a depth culling process. First, the traversal processing may be performed on the plurality of transparent objects in the virtual scene, for example, the traversal processing may be performed on the plurality of transparent objects in the virtual scene according to the above rendering order.
In step 402, rendering the traversed transparent object to a first depth buffer area to obtain a first depth image; the first depth buffer area corresponds to the same rendering size as the screen buffer area, and the first depth image comprises a plurality of pixels and first depths respectively corresponding to the pixels.
The traversed transparent object is rendered to a first depth buffer to obtain a first depth image, where the first depth buffer corresponds to the same rendering size as the screen buffer and is used for storing depth information in the virtual scene. The obtained first depth image includes a plurality of pixels and the depths corresponding to those pixels; for ease of distinction, the depth in the first depth image is named the first depth.
In step 403, the first depth image in the first depth buffer is downsampled to obtain a plurality of downsampled depths.
Since the transparent objects in the virtual scene need to be rendered to the color buffer, whose rendering size differs from that of the first depth buffer, the first depth image is down-sampled into a plurality of down-sampled depths in order to determine the transparent objects that need to be rendered for each pixel in the first image; one down-sampled depth corresponds to a plurality of first depths in the first depth image.
In some embodiments, the downsampling of the first depth image in the first depth buffer to obtain a plurality of downsampled depths may be implemented by: determining a down-sampling window according to the rendering size ratio between the screen buffer area and the color buffer area; and performing sliding processing in the first depth image according to the down-sampling window, and performing depth fusion processing on the first depth of the pixel covered by the down-sampling window after each sliding processing to obtain the down-sampling depth.
In the embodiments of the application, the first depth image may be down-sampled according to the rendering size ratio between the screen buffer and the color buffer. For example, the number of pixels covered by the down-sampling window is first determined according to that ratio; for example, when rendering size of the screen buffer : rendering size of the color buffer = 4 : 1, it is determined that the down-sampling window covers 4 pixels. The shape of the down-sampling window is not limited and may, for example, be a square.
Then, sliding processing is performed in the first depth image with the determined down-sampling window, and after each slide, depth fusion processing is performed on the first depths of the pixels covered by the window to obtain a down-sampled depth, until the down-sampling window has covered all pixels in the first depth image. The down-sampling window can be set so that it does not cover pixels that have already been covered, which improves down-sampling accuracy; the depth fusion processing can be averaging, taking the maximum value, taking the minimum value, or the like, and experiments verify that taking the minimum value yields a better boundary effect.
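The window-based down-sampling of the first depth image (step 403) could look like the following sketch, which takes the minimum depth in each non-overlapping window as suggested by the experimental note above; the DepthImage layout is assumed.

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

struct DepthImage {
    std::size_t width = 0, height = 0;
    std::vector<float> depth;              // row-major, one depth value per pixel
};

// Step 403: slide a non-overlapping factor x factor window over the first depth image and
// keep the minimum first depth of each window as the down-sampled depth.
DepthImage DownsampleDepthMin(const DepthImage& src, std::size_t factor) {
    DepthImage dst;
    dst.width  = src.width / factor;
    dst.height = src.height / factor;
    dst.depth.resize(dst.width * dst.height);
    for (std::size_t y = 0; y < dst.height; ++y) {
        for (std::size_t x = 0; x < dst.width; ++x) {
            float d = std::numeric_limits<float>::max();
            for (std::size_t wy = 0; wy < factor; ++wy) {
                for (std::size_t wx = 0; wx < factor; ++wx) {
                    d = std::min(d, src.depth[(y * factor + wy) * src.width + (x * factor + wx)]);
                }
            }
            dst.depth[y * dst.width + x] = d;
        }
    }
    return dst;
}
```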
In step 404, when any of the down-sampled depths is less than a second depth corresponding to the same pixel position in the second depth image, the second depth is updated according to any of the down-sampled depths and the traversed transparent object is allowed to be rendered at the same pixel position in the first image.
For each down-sampled depth obtained by the down-sampling process (taking any one down-sampled depth as an example), when that down-sampled depth is smaller than the second depth corresponding to the same pixel position in the second depth image, the second depth is updated according to that down-sampled depth (i.e., the second depth is replaced by the down-sampled depth), and the traversed transparent object is allowed to be rendered at the same pixel position of the first image.
It is worth noting that the second depth image is located in a second depth buffer, where the rendering size ratio between the screen buffer and the color buffer is equal to the rendering size ratio between the first depth buffer and the second depth buffer. When the second depth image is initialized, the depth (second depth) of each of its pixels may be initialized to the maximum value of the depth value range, for subsequent updating.
In step 405, when any one of the down-sampled depths is greater than or equal to the second depth, keeping the second depth unchanged and prohibiting the traversed transparent object from being rendered at the same pixel position of the first image; the second depth image is stored in a second depth buffer area, and the rendering size of the second depth buffer area is smaller than that of the first depth buffer area.
When any one of the down-sampled depths is greater than or equal to a second depth corresponding to the same pixel location in the second depth image, the second depth is kept unchanged and rendering of the traversed transparent object at the same pixel location in the first image is inhibited.
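Steps 404 and 405 together form an ordinary "less than" depth test against the second depth buffer; a minimal sketch (the second depth is assumed to have been initialized to the maximum depth value, as described above):

```cpp
// Steps 404-405: test one down-sampled depth against the second depth at the same pixel position.
// Returns true if the traversed transparent object may be rendered at that pixel of the first image.
bool DepthTestAndUpdate(float downsampledDepth, float& secondDepth) {
    if (downsampledDepth < secondDepth) {
        secondDepth = downsampledDepth;   // step 404: update the second depth
        return true;                      // and allow rendering at this pixel
    }
    return false;                         // step 405: keep the second depth, skip rendering here
}
```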
As shown in fig. 4D, in the embodiment of the present application, through a mechanism of depth elimination, a transparent object to be rendered at each pixel in the first image can be accurately determined, so as to ensure accuracy of the rendering process, and meanwhile, computational resources in the rendering process can be saved.
In some embodiments, referring to fig. 4E, fig. 4E is a flowchart of a rendering method of a virtual scene provided in an embodiment of the present application. Step 102 shown in fig. 4A may be replaced by step 501. In step 501, optimized transparent objects in the virtual scene are rendered to the color buffer to obtain a first image, where the optimized transparent objects include any one of the following: all transparent objects in the virtual scene, or the transparent objects in the virtual scene that have optimized rendering parameters.
Here, the optimized transparent objects in the virtual scene may be rendered to the color buffer to obtain the first image, specifically, all the transparent objects in the virtual scene may be rendered to the color buffer, or the transparent objects having the optimized rendering parameters in the virtual scene may be rendered to the color buffer.
In some embodiments, before step 103, further comprising: rendering non-optimized transparent objects and non-transparent objects in the virtual scene to a screen buffer.
For example, non-optimized transparent objects and non-transparent objects in the virtual scene may be rendered to the screen buffer, resulting in an intermediate image. Then, the first image in the color buffer is rendered to the screen buffer (i.e., onto the obtained intermediate image), and up-sampling processing is performed during the rendering to obtain the second image. Finally, the second image in the screen buffer is rendered to the screen to display the second image on the screen.
As shown in fig. 4E, while, before, or after the step 501 is executed, in a step 502, non-optimized transparent objects and non-transparent objects in the virtual scene may be rendered to a non-optimized buffer area, so as to obtain a third image; and the non-optimized buffer area and the screen buffer area correspond to the same rendering size.
For objects in the virtual scene that are distinguished from the optimized transparent objects, large screen rendering may be turned on. For example, non-optimized transparent objects and non-transparent objects in the virtual scene may be rendered to a non-optimized buffer area to obtain a third image, where the non-optimized transparent objects are transparent objects different from the optimized transparent objects, and the non-transparent objects are objects different from the transparent objects; the non-optimized buffer area corresponds to the same rendering size as the screen buffer area, and the non-optimized buffer area can be a new screen buffer area or a RenderTarget.
It is worth mentioning that the non-optimized transparent objects and the non-transparent objects in the virtual scene may be rendered into the same non-optimized buffer area, or the non-optimized transparent objects and the non-transparent objects in the virtual scene may be rendered into different non-optimized buffer areas, which is not limited in the embodiment of the present application.
In fig. 4E, the step 104 shown in fig. 4A may be updated to step 503, and in step 503, the second image in the screen buffer and the third image in the non-optimized buffer are rendered to the screen together, so that the second image and the third image are displayed in a superimposed manner in the screen.
For example, the second image in the screen buffer and the third image in the non-optimized buffer are rendered to the screen together, so that the second image and the third image are displayed in a superimposed manner in the screen, that is, all objects in the virtual scene are displayed, and the rendering integrity of the virtual scene is improved.
As shown in fig. 4E, in the embodiment of the present application, a large-screen rendering is performed on an object different from an optimized transparent object, so that the integrity of the rendering can be improved, and all objects (except objects prohibited from rendering) in a virtual scene can be displayed on a screen.
Next, an exemplary application of the embodiments of the present application in an actual application scenario will be described. First, the rendering principle of a virtual scene will be described in step form with reference to the schematic diagram shown in fig. 5.
1) A render target (RenderTarget) or a screen buffer is set as the drawing board. The RenderTarget refers to an intermediate buffer area used for storing image data in the image rendering process, and corresponds to the color buffer area described above. In this illustration of the general principle, the rendering size of the RenderTarget and the rendering size of the screen buffer are both consistent with the screen size.
2) Non-transparent objects in the virtual scene are rendered onto the drawing board. When all the non-transparent objects have been rendered, the color of a certain pixel (x, y) in the transparent object rendering area is denoted Color_0, and Color_0 will participate in blending as the background color Color_bg of transparent object 1.
3) Transparent object 1 in the virtual scene is prepared for rendering. As shown in fig. 5, the blending formula of transparent object 1 is set to Color_final = Color_1 * Alpha_1 + Color_0 * (1 - Alpha_1), wherein Color_1 is the color of transparent object 1 at the pixel (x, y) (corresponding to the Source Color in fig. 5) and Alpha_1 is its transparency (corresponding to the Source Alpha in fig. 5); Color_0 is the background color Color_bg (corresponding to the Dest Color, i.e. target color, in fig. 5), and 1 - Alpha_1 can be regarded as the transparency of the pixel (x, y); the color calculated by the blending formula is Color_final, which becomes the new color of the pixel (x, y). For transparent object 2, which also needs to be rendered at the pixel (x, y), Color_final becomes the new Color_bg.
4) Transparent object 2 in the virtual scene is prepared for rendering, with a blending formula similar to that of transparent object 1. After transparent object 2 is rendered at the pixel (x, y), the color of the pixel (x, y) is updated to Color_final = (Color_1 * Alpha_1 + Color_0 * (1 - Alpha_1)) * (1 - Alpha_2) + Color_2 * Alpha_2.
5) And so on: the final color Color_final of the pixel (x, y) is obtained after rendering the plurality of transparent objects, and is output to the screen to finish rendering.
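For ease of understanding, a minimal, self-contained C++ sketch of the per-pixel iteration described in steps 2) to 5) is given below; the structure name, function name, and sample values are illustrative only and are not identifiers defined by the embodiment:

#include <cstdio>
#include <utility>
#include <vector>

// Illustrative per-pixel blending iteration: each transparent object is
// blended over the current pixel color, and the result becomes the new
// background color for the next transparent object.
struct RGB { float r, g, b; };

static RGB Blend(const RGB& src, float srcAlpha, const RGB& dst) {
    // Color_final = Color_src * Alpha_src + Color_dst * (1 - Alpha_src)
    return { src.r * srcAlpha + dst.r * (1.0f - srcAlpha),
             src.g * srcAlpha + dst.g * (1.0f - srcAlpha),
             src.b * srcAlpha + dst.b * (1.0f - srcAlpha) };
}

int main() {
    RGB pixel = { 0.2f, 0.3f, 0.4f };                     // Color_0, written by the non-transparent pass
    std::vector<std::pair<RGB, float>> transparents = {   // (Color_i, Alpha_i), drawn in order
        { { 1.0f, 0.0f, 0.0f }, 0.5f },                   // transparent object 1
        { { 0.0f, 1.0f, 0.0f }, 0.25f } };                // transparent object 2
    for (const auto& t : transparents)                    // each Color_final becomes the next Color_bg
        pixel = Blend(t.first, t.second, pixel);
    std::printf("final pixel: %.3f %.3f %.3f\n", pixel.r, pixel.g, pixel.b);
    return 0;
}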
However, when the virtual scene includes many transparent objects (for example, a certain combat sub-scene in a game virtual scene includes many transparent particles), rendering the transparent objects in a high-precision (large rendering size) RenderTarget or screen buffer area is costly: since transparent objects cannot be occlusion-culled, they overlap heavily, and too many pixels have their color and transparency calculated (one pixel is drawn many times), which puts considerable pressure on the GPU. Therefore, in the embodiment of the present application, an off-screen rendering (small-screen rendering) manner is adopted, and transparent objects that are time-consuming to render in the virtual scene are rendered to a RenderTarget whose rendering size is smaller than the screen buffer area; because the rendering area is small, rendering can be completed at a faster speed. The rendering result in the RenderTarget is then up-sampled (Upsample) back to the screen, so that the drawing process of the transparent objects can be obviously shortened, the overall performance of the rendering process is improved, and the user experience is improved.
For example, transparent objects that have little influence on the resolution may be rendered into a RenderTarget whose rendering size is only 1/4 of the screen, so that the total number of pixels to be calculated is greatly reduced, the rendering performance of this part of transparent objects is significantly improved, and the picture effect can still be ensured to a certain extent. Transparent objects that have little influence on the resolution may include transparent objects with natural transitions (e.g., water) and transparent objects with a short duration (e.g., transparent particles). In addition, as shown in fig. 6, when the virtual scene includes few transparent objects (for example, the number of transparent objects is less than or equal to the number threshold), small-screen rendering may be turned off through a dynamic switch, so as to achieve a better rendering effect.
The solution provided by the embodiment of the present application can be integrated in a rendering component of a virtual scene engine (such as the Unreal Engine 4), and supports turning on and off through at least one of the following two strategies.
1) Turning on and off is controlled through console variables of the virtual scene engine. For example, related personnel of the virtual scene (such as production personnel and planning personnel) can configure a plurality of sub-scenes in the virtual scene (such as a dungeon (instance) sub-scene and a world area sub-scene) one by one: for sub-scenes that involve a large amount of combat (or interaction) and/or place high rendering pressure on transparent objects, small-screen rendering can be set to on; for a single-player story sub-scene, small-screen rendering can be set to off.
2) Whether to start small screen rendering is determined by automatic detection of the number of transparent objects. For example, the number of all transparent objects needing to be rendered to the small screen in the virtual scene can be determined, the number is compared with a preset number threshold, and if the number is larger than the number threshold, the small screen rendering is started to relieve the rendering pressure; if the number is less than or equal to the number threshold, the small screen rendering is turned off.
In the embodiment of the present application, the rendering parameter of a transparent object may default to Auto, where Auto indicates that transparent particles need to be rendered to the small screen while other transparent objects do not (i.e., they need large-screen rendering, the large screen referring to the size of the screen). Of course, a transparent object may also be forced to be rendered to the small screen, for example by setting its rendering parameter to Force Down-sample Pass (corresponding to the optimized rendering parameter above); or forced to be rendered to the large screen, for example by setting its rendering parameter to Force instance Down-sample Pass.
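As a minimal C++ illustration of this parameter-based classification (the identifiers, such as ETransparentRenderMode, are assumptions of this description rather than names defined by the embodiment):

// Assumed names for illustration; the embodiment only specifies the semantics
// Auto / forced small-screen / forced large-screen.
enum class ETransparentRenderMode { Auto, ForceDownsample, ForceFullResolution };

struct TransparentObject {
    ETransparentRenderMode mode = ETransparentRenderMode::Auto;
    bool isParticle = false;   // under Auto, only transparent particles go to the small screen
};

// Returns true when the object should be rendered into the reduced-size color buffer.
static bool UseSmallScreen(const TransparentObject& obj) {
    switch (obj.mode) {
        case ETransparentRenderMode::ForceDownsample:     return true;   // forced to the small screen
        case ETransparentRenderMode::ForceFullResolution: return false;  // forced to the large screen
        case ETransparentRenderMode::Auto:                return obj.isParticle;
    }
    return false;
}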
As an example, the present application provides a rendering schematic diagram of a virtual scene as shown in fig. 7. The optimized transparent objects in the virtual scene (here, the transparent objects with optimized rendering parameters) are rendered to a RenderTarget and then upsampled into a screen buffer; at the same time, the non-optimized transparent objects and non-transparent objects in the virtual scene are rendered into another screen buffer (corresponding to the non-optimized buffer above); finally, the plurality of screen buffers are integrated to output a superimposed picture for display on the screen. The size of the screen is 1366 × 752, and the rendering size of the RenderTarget is 668 × 376, which is about 1/4 of the screen size. The Load Action shown in fig. 7 refers to the strategy of reading the color data required by the GPU from the memory into the GPU local storage, where Load means copying data from the memory to the GPU local storage (equivalent to memcpy on the CPU), DontCare means the data is discarded without performing any operation (equivalent to keeping the GPU local storage in its original state), and Clear means the data is discarded and the GPU local storage is set to the same default value (equivalent to memset on the CPU). The Store Action shown in fig. 7 refers to the strategy of writing the RenderTarget in the local storage back to the memory after the GPU finishes, where Store means a complete copy from the GPU local storage to the memory, and DontCare means no operation, in which case the data in the GPU local storage may be overwritten by other operations.
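For ease of understanding, the Load Action and Store Action options described above may be mirrored by a small C++ sketch such as the following; the enumeration and structure names are illustrative only and are not part of any particular graphics API:

enum class LoadAction  { Load, DontCare, Clear };   // how GPU local storage is initialised from memory
enum class StoreAction { Store, DontCare };         // whether GPU local storage is written back to memory

struct RenderTargetPolicy {
    LoadAction  load;
    StoreAction store;
};

// A small-screen color RenderTarget that never needs its previous contents but whose
// result is read back for the Upsample pass could, for example, be described as:
constexpr RenderTargetPolicy kSmallScreenColor { LoadAction::Clear, StoreAction::Store };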
Next, a rendering flow of a virtual scene in the embodiment of the present application will be described with reference to fig. 7, and for ease of understanding, the description will be made in the form of steps.
1) An intermediate structure for saving the small-screen data is prepared. Here, a color buffer (i.e., color RenderTarget) having a rendering size smaller than that of the screen buffer may be created, the color buffer being used to store intermediate data such as color and transparency in the small-screen rendering process.
2) Depth culling for the small-screen rendering process is prepared. For example, a first depth buffer (depth buffer or depth RenderTarget) corresponding to the virtual scene is determined; the first depth buffer is used for storing depth information in the virtual scene, and its rendering size is consistent with the screen size. Then, down-sampling (Downsample) processing is performed on the data in the first depth buffer to obtain a second depth buffer, wherein one pixel in the second depth image of the second depth buffer corresponds to four pixels in the first depth image of the first depth buffer. During down-sampling, sliding processing may be performed in the first depth image according to a down-sampling window covering four pixels, and depth fusion processing may be performed on the depths of the four pixels covered by the down-sampling window after each sliding step, so as to obtain the depth of the corresponding pixel in the second depth image. The depth fusion processing may take the maximum value or the minimum value.
As an example, the present embodiment provides a comparative schematic diagram as shown in fig. 8, and in fig. 8, an image 81 finally displayed in a screen in the case where the maximum value is taken at the time of the down-sampling process is shown; also shown is the image 82 that is finally displayed in the screen in the case where the minimum value is taken at the time of the down-sampling process. As can be confirmed from fig. 8, a better boundary effect can be obtained when the minimum value is taken, and therefore, the depth that is the smallest among the four depths can be taken as the depth in the second depth image in the downsampling process, and the pseudo code is as follows:
Depth0=Texture2DSample(SourceTexture,SourceTextureSampler,LeftUpUV).r;
Depth1=Texture2DSample(SourceTexture,SourceTextureSampler,LeftDownUV).r;
Depth2=Texture2DSample(SourceTexture,SourceTextureSampler,RightUpUV).r;
Depth3=Texture2DSample(SourceTexture,SourceTextureSampler,RightDownUV).r;
OutDepth=min(min(Depth1,Depth0),min(Depth2,Depth3));
wherein SourceTexture refers to the first depth buffer area, which is a 2-dimensional depth data storage area; a value specifying the vertical and horizontal coordinates, such as LeftUpUV, needs to be given in order to extract one of the depths. SourceTextureSampler is the sampler used to take data from the first depth buffer area and specifies how the data is taken; for example, a single depth may be returned according to the LeftUpUV coordinate, or several surrounding depths may be obtained according to the coordinate and averaged before being returned, the specific operation being determined by the logic in the sampler. LeftUpUV refers to the coordinate corresponding to the upper-left depth among the four depths covered by the down-sampling window, and the other coordinates are analogous. Texture2DSample refers to a function that fetches data from a specified buffer to obtain the value at the corresponding coordinate.
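For ease of understanding, an equivalent CPU-side C++ sketch of the same 2 × 2 minimum down-sampling over the whole depth image is given below; the function name and the flat array layout are illustrative only:

#include <algorithm>
#include <vector>

// CPU-side equivalent of the shader snippet above: every 2x2 block of the
// first depth image is reduced to its minimum value (the better boundary
// effect shown in fig. 8). Assumes the source width and height are even.
static std::vector<float> DownsampleDepthMin(const std::vector<float>& src, int width, int height) {
    std::vector<float> dst((width / 2) * (height / 2));
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            float d0 = src[y * width + x];             // left-up
            float d1 = src[(y + 1) * width + x];       // left-down
            float d2 = src[y * width + x + 1];         // right-up
            float d3 = src[(y + 1) * width + x + 1];   // right-down
            dst[(y / 2) * (width / 2) + (x / 2)] = std::min(std::min(d0, d1), std::min(d2, d3));
        }
    }
    return dst;
}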
3) Whether to start small-screen rendering is judged. For example, a comprehensive judgment can be performed through two conditions, and small-screen rendering is started when the two conditions are met simultaneously. When each frame of the virtual scene is rendered, the pseudo code for judging whether to start small-screen rendering is as follows:
for all transparent objects in the virtual scene, the following processing is performed: {
When the rendering parameter of the transparent object is the optimized rendering parameter (indicating that small-screen rendering is required), the small screen count is incremented by 1; }
When the console variable of the virtual scene indicates that small screen rendering is performed and the small screen count is greater than the number threshold, executing the following processing: {
Starting small screen rendering; }
Where the small screen count may be initialized to zero at initialization.
4) A blending formula for small-screen rendering and for the Upsample back to the screen is set.
According to the transparent object rendering process shown in fig. 5, the blending formula for two transparent objects superimposed on the background color is:
Color_final = (Color_1 * Alpha_1 + Color_0 * (1 - Alpha_1)) * (1 - Alpha_2) + Color_2 * Alpha_2
Here, the background color Color_0 participates in the iteration from the very first blend. In small-screen rendering, however, the background color Color_0 of the large screen only participates in blending when the small screen is restored back to the large screen. Therefore, if blending the color of the small screen with the background color Color_0 of the large screen is to give the same effect as normal transparent object rendering, the blending formula needs to be changed. Expanding the above blending formula gives:
=> Color_final = Color_1 * Alpha_1 * (1 - Alpha_2) + Color_0 * (1 - Alpha_1) * (1 - Alpha_2) + Color_2 * Alpha_2
Here, if at small-screen rendering time the quantity A = (1 - Alpha_1) * (1 - Alpha_2) of each iteration is kept in the A channel of the four RGBA channels, while the RGB channels store RGB = Color_1 * Alpha_1 * (1 - Alpha_2) + Color_2 * Alpha_2, then when the Upsample returns to the large screen, the same calculation result as the normal large-screen rendering process can be obtained.
Thus, the blending formula for small screen rendering can be determined as:
RGB=SourceColor*SourceAlpha+DestColor*ReverseSourceAlpha
A=DestAlpha*(1-SourceAlpha)+0
wherein SourceColor represents the color of the transparent object to be rendered at this time, SourceAlpha represents the transparency of the transparent object to be rendered at this time, DestColor represents the color (background color) of the pixel, DestAlpha represents the transparency (background transparency) of the pixel, and ReverseSourceAlpha represents the complementary transparency of the transparent object to be rendered at this time, namely 1-SourceAlpha. The calculated A is the DestAlpha when the next transparent object is rendered.
It is worth noting that, in the formula for calculating the value of A, the value 0 is a format requirement of the engineering code, i.e., in engineering code, a blending operation is generally the addition of two terms.
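For ease of understanding, the small-screen blending formulas above can be written as a per-pixel C++ function as follows; the structure and function names are illustrative only and are not identifiers from the embodiment:

// Per-pixel form of the small-screen blending formulas.
struct RGBA { float r, g, b, a; };

// dst accumulates the small-screen result (initialised to RGB = 0, A = 1);
// src is the transparent object currently being drawn at this pixel.
static RGBA BlendSmallScreen(const RGBA& src, const RGBA& dst) {
    const float inv = 1.0f - src.a;                 // ReverseSourceAlpha
    return { src.r * src.a + dst.r * inv,           // RGB = SourceColor*SourceAlpha + DestColor*ReverseSourceAlpha
             src.g * src.a + dst.g * inv,
             src.b * src.a + dst.b * inv,
             dst.a * inv };                          // A = DestAlpha*(1 - SourceAlpha) + 0
}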
In addition, it can also be determined that the mixing formula of rendering the small screen back to the large screen is as follows:
RGB=DestColor*SourceAlpha+SourceColor
here, DestColor refers to a color (background color) of a large screen, SourceColor refers to a color finally calculated in a small screen, and SourceAlpha refers to a transparency finally calculated in a small screen.
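Similarly, the blending for rendering the small screen back to the large screen can be sketched as follows, reusing the illustrative RGBA structure from the previous sketch; the function name is again an assumption of this description:

// Compositing the small-screen result back onto the large screen.
// smallScreen.a is the transparency finally calculated in the small screen,
// smallScreen.r/g/b the color finally calculated in the small screen.
static RGBA BlendBackToLargeScreen(const RGBA& smallScreen, const RGBA& largeScreen) {
    return { largeScreen.r * smallScreen.a + smallScreen.r,   // RGB = DestColor*SourceAlpha + SourceColor
             largeScreen.g * smallScreen.a + smallScreen.g,
             largeScreen.b * smallScreen.a + smallScreen.b,
             0.0f };                                           // the displayed transparency defaults to 0 (see the note below)
}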
For easy understanding, the rendering of two transparent objects (Color_1, Alpha_1) and (Color_2, Alpha_2) is taken as an example. First, each pixel in the image of the small screen (corresponding to the first image above) is initialized to DestColor = 0 and DestAlpha = 1.
Then, at the first rendering, i.e. when rendering the transparent object (Color_1, Alpha_1), the blending formulas give:
Color_final = Color_1 * Alpha_1 + 0 * (1 - Alpha_1) = Color_1 * Alpha_1
A = 1 * (1 - Alpha_1) = 1 - Alpha_1
At the second rendering, i.e. when rendering the transparent object (Color_2, Alpha_2), the blending formulas give:
Color_final = Color_2 * Alpha_2 + Color_1 * Alpha_1 * (1 - Alpha_2)
A = (1 - Alpha_1) * (1 - Alpha_2)
When rendering back to the large screen, the blending formula gives:
Color_final = Color_0 * (1 - Alpha_1) * (1 - Alpha_2) + Color_2 * Alpha_2 + Color_1 * Alpha_1 * (1 - Alpha_2)
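As an additional check, this equivalence can be verified numerically with arbitrary sample values, as in the following self-contained C++ sketch (the values are illustrative only):

#include <cassert>
#include <cmath>
#include <cstdio>

// Numeric check of the equivalence derived above, using one color channel.
int main() {
    float c0 = 0.8f;                 // Color_0 (large-screen background)
    float c1 = 0.3f, a1 = 0.6f;      // transparent object 1
    float c2 = 0.9f, a2 = 0.4f;      // transparent object 2

    // Normal large-screen rendering: blend object 1 then object 2 directly over the background.
    float large = (c1 * a1 + c0 * (1 - a1)) * (1 - a2) + c2 * a2;

    // Small-screen rendering: start from DestColor = 0, DestAlpha = 1.
    float rgb = 0.0f, a = 1.0f;
    rgb = c1 * a1 + rgb * (1 - a1);  a = a * (1 - a1);   // draw object 1
    rgb = c2 * a2 + rgb * (1 - a2);  a = a * (1 - a2);   // draw object 2

    // Return to the large screen: RGB = DestColor * SourceAlpha + SourceColor.
    float composed = c0 * a + rgb;

    std::printf("large = %f, composed = %f\n", large, composed);
    assert(std::fabs(large - composed) < 1e-6f);
    return 0;
}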
this is consistent with the colors obtained from normal large screen rendering as described above.
It is worth noting that when the small screen is rendered back to the large screen, the small screen is first rendered to the screen buffer area and then rendered to the screen through the screen buffer area. When rendering to the screen buffer area, the length and the width of the image need to be stretched to twice their original values, and the RGB of the extra pixels can be calculated by interpolation.
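For ease of understanding, a minimal C++ sketch of such a doubling-with-interpolation step is given below, operating on a single channel; the bilinear interpolation used here is only one possible choice of interpolation, and the function name is illustrative only:

#include <algorithm>
#include <vector>

// Stretches the small-screen image to twice its width and height; the extra
// pixels are filled by interpolating between the nearest small-screen pixels.
static std::vector<float> Upsample2x(const std::vector<float>& src, int w, int h) {
    std::vector<float> dst(4 * w * h);
    for (int y = 0; y < 2 * h; ++y) {
        for (int x = 0; x < 2 * w; ++x) {
            float sx = std::min(x * 0.5f, float(w - 1));
            float sy = std::min(y * 0.5f, float(h - 1));
            int x0 = int(sx), y0 = int(sy);
            int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
            float fx = sx - x0, fy = sy - y0;
            float top    = src[y0 * w + x0] * (1 - fx) + src[y0 * w + x1] * fx;
            float bottom = src[y1 * w + x0] * (1 - fx) + src[y1 * w + x1] * fx;
            dst[y * (2 * w) + x] = top * (1 - fy) + bottom * fy;
        }
    }
    return dst;
}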
It should also be noted that when the small screen is rendered back to the large screen, the display can be performed using the obtained RGB alone, i.e. the transparency effect is already embodied in the RGB. The finally calculated transparency is ignored, and the transparency at display time can be set to 0 by default, since the display window of the current virtual scene does not need to be transparent to reveal the desktop behind it; the data transmission process for the transparency can therefore be omitted, reducing the transmission pressure.
5) Bandwidth and memory related optimization. In the small-screen rendering process, depth culling processing can be performed through the second depth buffer area, so that the number of pixels needing to be calculated is reduced, and the use of memory and bandwidth is optimized. For example, suppose the pixels of two transparent objects, a transparent object A and a transparent object B, lie at different distances: the pixels of A include Ap1 and Ap2, the pixels of B include Bp1 and Bp2, and when displayed, Bp1 is shielded by Ap1 while Ap2 is shielded by Bp2, i.e. each of the two transparent objects has one visible part and one shielded part. Without depth culling processing, drawing A first and then B would result in the pixels of B completely covering A, and drawing B first and then A would result in the pixels of A completely covering B; both lead to an erroneous drawing result. Therefore, the embodiment of the present application provides a mechanism that compares the relative distance of pixels, namely the depth culling processing. For example, when A is drawn first, the depths of Ap1 and Ap2 are written into the second depth buffer; when Bp1 is then drawn and Ap1 at the same position (i.e. the same pixel position) is found to be closer to the viewpoint (smaller depth), the color calculation result of Bp1 can be discarded, or the color of Bp1 need not be calculated at all, so that Ap1 is kept in front. When Bp2 is drawn and found to be closer to the viewpoint than Ap2, the color of Bp2 is calculated and covers the color of Ap2 (i.e. color mixing and transparency mixing are performed). In this way, Ap1 and Bp2 are drawn correctly and displayed on the screen without error.
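For ease of understanding, the per-pixel depth rejection described above can be sketched in C++ as follows; the structure and function names are illustrative only, and a single color channel is used for brevity:

#include <vector>

// A transparent pixel is blended only if it is not behind what is already
// recorded in the second depth buffer; otherwise its color need not be computed.
struct DepthCulledTarget {
    std::vector<float> color;   // accumulated small-screen color
    std::vector<float> depth;   // second depth buffer (smaller depth = closer to the viewpoint)
};

static void DrawTransparentPixel(DepthCulledTarget& target, int index,
                                 float srcColor, float srcAlpha, float srcDepth) {
    if (srcDepth >= target.depth[index])
        return;                                     // occluded (e.g. Bp1 behind Ap1): discard, skip the color math
    target.depth[index] = srcDepth;                 // closer (e.g. Bp2 in front of Ap2): record the new depth
    target.color[index] = srcColor * srcAlpha +     // and blend over the existing color
                          target.color[index] * (1.0f - srcAlpha);
}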
In addition, the identifier Memoryless may be set when creating the RenderTarget (for example, on an iOS device), so that the CPU does not allocate RenderTarget memory and the GPU creates the RenderTarget in Tile memory space, which is a high-speed memory space. In this way, the RenderTarget only occupies space on the graphics card and does not occupy main memory, which effectively avoids the low data transmission efficiency and high power consumption caused by copying data back and forth between the GPU and the memory, and reduces the computing resource overhead in the rendering process.
The embodiment of the application has at least the following technical effects: 1) the rendering time is significantly reduced: for comparison, fig. 9A shows an image rendered according to the scheme provided by the related art, whose rendering takes 7.6 milliseconds, while fig. 9B shows an image rendered according to the scheme provided by the embodiment of the present application, whose rendering takes 4 milliseconds; 2) the computational load of the GPU is effectively reduced: fig. 10A shows the GPU usage rate (Gauge) during rendering of the image corresponding to fig. 9A, with an example value of 65%, and fig. 10B shows the GPU usage rate during rendering of the image corresponding to fig. 9B, with an example value of 32%; thus, the embodiment of the present application significantly reduces the GPU load, by about 33 percentage points in this example.
In addition, in the embodiment of the present application, the rendering pressure can be further reduced and the rendering efficiency improved by reducing the complexity of transparent objects, reducing material nodes, reducing complex shader functions, and the like.
Continuing with the exemplary structure of the virtual scene rendering device 455 provided by the embodiment of the present application implemented as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the virtual scene rendering device 455 of the memory 450 may include: a creating module 4551 for creating a color buffer having a rendering size smaller than that of the screen buffer; a first rendering module 4552, configured to render a transparent object in a virtual scene to a color buffer to obtain a first image; a second rendering module 4553, configured to render the first image in the color buffer to the screen buffer, and perform upsampling processing during the rendering process to obtain a second image; a screen rendering module 4554, configured to render the second image in the screen buffer to the screen, so as to display the second image in the screen.
In some embodiments, the first rendering module 4552 is further configured to: traversing a plurality of transparent objects corresponding to a first pixel in a first image, and executing the following processing according to the traversed transparent objects: performing color mixing processing according to the traversed color of the transparent object, the traversed transparency of the transparent object, the color of the first pixel and the traversed complementary transparency of the transparent object to obtain a new color of the first pixel; performing transparency mixing processing according to the traversed transparency of the transparent object and the transparency of the first pixel to obtain new transparency of the first pixel; wherein the first pixel represents any one pixel in the first image.
In some embodiments, the rendering apparatus 455 of the virtual scene further includes a sorting module, configured to sort, according to the transparent object parameter, the plurality of transparent objects in the virtual scene to obtain a rendering order; wherein the transparent object parameter comprises at least one of depth, volume, and complexity; the first rendering module 4552 is further configured to perform traversal processing on the plurality of transparent objects corresponding to the first pixel according to the rendering order.
In some embodiments, the second rendering module 4553 is further configured to: performing color mixing processing according to the color of the first pixel in the first image, the transparency of the first pixel and the color of the intermediate pixel in the intermediate image to obtain a new color of the intermediate pixel; the first pixel represents any pixel in the first image, and the first pixel corresponds to the same pixel position as the middle pixel; the intermediate image is stored in a screen buffer and is used for performing upsampling processing to obtain a second image.
In some embodiments, the first rendering module 4552 is further configured to render the transparent objects in the virtual scene to the color buffer when the optimized rendering condition is satisfied; wherein the optimized rendering condition comprises at least one of: a sub-scene to be rendered of the virtual scene belongs to an optimized sub-scene; wherein the optimized sub-scenes comprise at least part of the sub-scenes in the virtual scene; the number of transparent objects with optimized rendering parameters in the virtual scene is greater than a number threshold; the scene parameter of the virtual scene is larger than the scene parameter threshold value; wherein the scene parameters include at least one of interaction parameters of the virtual objects, a number of the virtual objects, and device resource usage parameters.
In some embodiments, the virtual scene includes a plurality of sub-scenes; the rendering device 455 of the virtual scene further comprises a scene optimization module for performing at least one of the following processes: in response to an optimized rendering configuration operation for at least part of the sub-scenes in the virtual scene, treating at least part of the sub-scenes as optimized sub-scenes; and screening the multiple sub-scenes according to the historical scene parameters respectively corresponding to the multiple sub-scenes to obtain an optimized sub-scene.
In some embodiments, the rendering device 455 of the virtual scene further comprises a transparent object optimization module for performing at least one of the following processes: updating rendering parameters of at least partially transparent objects in the virtual scene to optimized rendering parameters in response to an optimized rendering configuration operation for the at least partially transparent objects; determining the types of a plurality of transparent objects in the virtual scene, and updating the rendering parameters of the transparent objects meeting the type conditions into optimized rendering parameters.
In some embodiments, the rendering device 455 of the virtual scene further comprises a depth culling module for: traversing a plurality of transparent objects in the virtual scene, and executing the following processing according to the traversed transparent objects: rendering the traversed transparent object to a first depth buffer area to obtain a first depth image; the first depth buffer area corresponds to the same rendering size as the screen buffer area, and the first depth image comprises a plurality of pixels and first depths respectively corresponding to the pixels; performing downsampling processing on the first depth image in the first depth buffer area to obtain a plurality of downsampling depths; updating the second depth according to any one of the down-sampled depths when the any one of the down-sampled depths is less than a second depth corresponding to the same pixel position in the second depth image, and allowing the traversed transparent object to be rendered at the same pixel position in the first image; when any one of the down-sampling depths is greater than or equal to the second depth, keeping the second depth unchanged, and forbidding the traversed transparent object to be rendered at the same pixel position of the first image; the second depth image is stored in a second depth buffer area, and the rendering size of the second depth buffer area is smaller than that of the first depth buffer area.
In some embodiments, the depth culling module is further to: determining a down-sampling window according to the rendering size ratio between the screen buffer area and the color buffer area; and performing sliding processing in the first depth image according to the down-sampling window, and performing depth fusion processing on the first depth of the pixel covered by the down-sampling window after each sliding processing to obtain the down-sampling depth.
In some embodiments, the first rendering module 4552 is further configured to: rendering the optimized transparent objects in the virtual scene to a color buffer area; wherein the optimized transparent object comprises any one of the following: all transparent objects in the virtual scene, and the transparent objects with optimized rendering parameters in the virtual scene; the second rendering module 4553 is further configured to: rendering non-optimized transparent objects and non-transparent objects in the virtual scene to the screen buffer.
In some embodiments, the first rendering module 4552 is further configured to: rendering the optimized transparent objects in the virtual scene to a color buffer area; wherein the optimized transparent object comprises any one of the following: all transparent objects in the virtual scene, and the transparent objects with optimized rendering parameters in the virtual scene; the rendering device 455 of the virtual scene further comprises a large screen rendering module for: rendering the non-optimized transparent object and the non-transparent object in the virtual scene to a non-optimized buffer area to obtain a third image; the non-optimization buffer area and the screen buffer area correspond to the same rendering size; the screen rendering module 4554 is further configured to render the second image in the screen buffer and the third image in the non-optimized buffer to the screen together, so as to display the second image and the third image in the screen in an overlapping manner.
In some embodiments, the second rendering module 4553 is further configured to: according to the rendering size ratio between the screen buffer area and the color buffer area, pixel interpolation processing is carried out on an intermediate image obtained based on the rendering of the first image to obtain a second image; and the size of the second image is the same as the rendering size of the screen buffer area.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions (i.e., executable instructions) stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the electronic device executes the method for rendering the virtual scene according to the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, cause the processor to perform a rendering method of a virtual scene provided by embodiments of the present application, for example, the rendering method of a virtual scene as shown in fig. 4A, 4B, 4C, 4D, and 4E.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A method for rendering a virtual scene, the method comprising:
creating a color buffer area with a rendering size smaller than that of the screen buffer area;
rendering a transparent object in the virtual scene to the color buffer area to obtain a first image;
rendering the first image in the color buffer area to the screen buffer area, and performing up-sampling processing in the rendering process to obtain a second image;
rendering the second image in the screen buffer to a screen to display the second image in the screen.
2. The method of claim 1, wherein rendering transparent objects in the virtual scene to the color buffer results in a first image comprising:
traversing a plurality of transparent objects corresponding to a first pixel in the first image, and executing the following processing according to the traversed transparent objects:
performing color mixing processing according to the color of the traversed transparent object, the transparency of the traversed transparent object, the color of the first pixel and the complementary transparency of the traversed transparent object to obtain a new color of the first pixel;
performing transparency mixing processing according to the transparency of the traversed transparent object and the transparency of the first pixel to obtain new transparency of the first pixel;
wherein the first pixel represents any one pixel in the first image.
3. The method of claim 2, wherein before traversing the plurality of transparent objects corresponding to the first pixel in the first image, the method further comprises:
sequencing a plurality of transparent objects in the virtual scene according to the transparent object parameters to obtain a rendering sequence;
wherein the transparent object parameter comprises at least one of depth, volume, and complexity;
the traversing the plurality of transparent objects corresponding to the first pixel in the first image includes:
and traversing the plurality of transparent objects corresponding to the first pixel according to the rendering sequence.
4. The method of claim 1, wherein the rendering the first image in the color buffer to the screen buffer comprises:
performing color mixing processing according to the color of the first pixel in the first image, the transparency of the first pixel and the color of the intermediate pixel in the intermediate image to obtain a new color of the intermediate pixel;
the first pixel represents any one pixel in the first image, and corresponds to the same pixel position as the intermediate pixel;
wherein the intermediate image is stored in the screen buffer and used for performing the upsampling process to obtain the second image.
5. The method of any of claims 1 to 4, wherein the rendering transparent objects in the virtual scene to the color buffer comprises:
when an optimized rendering condition is met, rendering transparent objects in the virtual scene to the color buffer area;
wherein the optimized rendering condition comprises at least one of:
the sub-scene to be rendered of the virtual scene belongs to an optimized sub-scene; wherein the optimized sub-scenes comprise at least some of the sub-scenes in the virtual scene;
the number of transparent objects with optimized rendering parameters in the virtual scene is greater than a number threshold;
the scene parameter of the virtual scene is greater than a scene parameter threshold; wherein the scene parameters include at least one of interaction parameters of virtual objects, a number of virtual objects, and device resource usage parameters.
6. The method of claim 5, wherein the virtual scene comprises a plurality of sub-scenes; the method further comprises the following steps:
performing at least one of:
in response to an optimized rendering configuration operation for at least some of the sub-scenes in the virtual scene, treating the at least some of the sub-scenes as optimized sub-scenes;
and screening the plurality of sub-scenes according to the historical scene parameters respectively corresponding to the plurality of sub-scenes to obtain an optimized sub-scene.
7. The method of claim 5, further comprising:
performing at least one of:
updating rendering parameters of at least partially transparent objects in the virtual scene to the optimized rendering parameters in response to an optimized rendering configuration operation for the at least partially transparent objects;
determining the types of a plurality of transparent objects in the virtual scene, and updating the rendering parameters of the transparent objects meeting the type conditions into the optimized rendering parameters.
8. The method according to any one of claims 1 to 4, further comprising:
traversing a plurality of transparent objects in the virtual scene, and executing the following processing according to the traversed transparent objects:
rendering the traversed transparent object to a first depth buffer area to obtain a first depth image; the first depth buffer area corresponds to the same rendering size as the screen buffer area, and the first depth image comprises a plurality of pixels and first depths respectively corresponding to the pixels;
performing downsampling processing on the first depth image in the first depth buffer area to obtain a plurality of downsampling depths;
when any one of the down-sampled depths is smaller than a second depth corresponding to the same pixel position in a second depth image, updating the second depth according to the any one of the down-sampled depths, and allowing the traversed transparent object to be rendered at the same pixel position of the first image;
when the any one down-sampled depth is greater than or equal to the second depth, keeping the second depth unchanged and prohibiting the traversed transparent object from being rendered at the same pixel position of the first image;
wherein the second depth image is stored in a second depth buffer, and a rendering size of the second depth buffer is smaller than the first depth buffer.
9. The method of claim 8, wherein downsampling the first depth image in the first depth buffer to obtain a plurality of downsampled depths comprises:
determining a down-sampling window according to the rendering size ratio between the screen buffer area and the color buffer area;
and performing sliding processing in the first depth image according to the downsampling window, and performing depth fusion processing on the first depth of the pixels covered by the downsampling window after each sliding processing to obtain the downsampling depth.
10. The method of any of claims 1 to 4, wherein the rendering transparent objects in the virtual scene to the color buffer comprises:
rendering optimized transparent objects in the virtual scene to the color buffer;
wherein the optimized transparent object comprises any one of: all transparent objects in the virtual scene, transparent objects in the virtual scene having optimized rendering parameters;
before the rendering the first image in the color buffer to the screen buffer, the method further comprises:
rendering non-optimized transparent objects and non-transparent objects in the virtual scene to the screen buffer.
11. The method of any of claims 1 to 4, wherein the rendering transparent objects in the virtual scene to the color buffer comprises:
rendering optimized transparent objects in the virtual scene to the color buffer;
wherein the optimized transparent object comprises any one of: all transparent objects in the virtual scene, transparent objects in the virtual scene having optimized rendering parameters;
the method further comprises the following steps:
rendering the non-optimized transparent object and the non-transparent object in the virtual scene to a non-optimized buffer area to obtain a third image; wherein the non-optimized buffer corresponds to the same rendering size as the screen buffer;
wherein the second image and the third image are used for being jointly rendered to the screen so as to be displayed in an overlapping manner in the screen.
12. The method according to any one of claims 1 to 4, wherein the performing an upsampling process during the rendering process to obtain the second image comprises:
according to the rendering size ratio between the screen buffer area and the color buffer area, performing pixel interpolation processing on an intermediate image obtained based on the rendering of the first image to obtain a second image;
wherein the size of the second image is the same as the rendered size of the screen buffer.
13. An apparatus for rendering a virtual scene, the apparatus comprising:
the creating module is used for creating a color buffer area with a rendering size smaller than that of the screen buffer area;
the first rendering module is used for rendering the transparent object in the virtual scene to the color buffer area to obtain a first image;
the second rendering module is used for rendering the first image in the color buffer area to the screen buffer area and performing up-sampling processing in the rendering process to obtain a second image;
and the screen rendering module is used for rendering the second image in the screen buffer area to a screen so as to display the second image in the screen.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing a method of rendering a virtual scene as claimed in any one of claims 1 to 12 when executing executable instructions stored in the memory.
15. A computer-readable storage medium storing executable instructions for implementing a method for rendering a virtual scene according to any one of claims 1 to 12 when executed by a processor.
CN202110836252.8A 2021-07-23 2021-07-23 Virtual scene rendering method and device and electronic equipment Active CN113470153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110836252.8A CN113470153B (en) 2021-07-23 2021-07-23 Virtual scene rendering method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110836252.8A CN113470153B (en) 2021-07-23 2021-07-23 Virtual scene rendering method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113470153A true CN113470153A (en) 2021-10-01
CN113470153B CN113470153B (en) 2024-10-11

Family

ID=77882067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110836252.8A Active CN113470153B (en) 2021-07-23 2021-07-23 Virtual scene rendering method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113470153B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080122835A1 (en) * 2006-11-28 2008-05-29 Falco Jr Peter F Temporary Low Resolution Rendering of 3D Objects
WO2012076778A1 (en) * 2010-12-10 2012-06-14 Real Fusio France Method for rendering images from a three-dimensional virtual scene
CN104008525A (en) * 2014-06-06 2014-08-27 无锡梵天信息技术股份有限公司 Low-resolution particle drawing method for improving resolution based on double buffering
CN113129417A (en) * 2019-12-27 2021-07-16 华为技术有限公司 Image rendering method in panoramic application and terminal equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023098583A1 (en) * 2021-11-30 2023-06-08 华为技术有限公司 Rendering method and related device thereof
WO2023160167A1 (en) * 2022-02-28 2023-08-31 荣耀终端有限公司 Image processing method, electronic device, and storage medium

Also Published As

Publication number Publication date
CN113470153B (en) 2024-10-11

Similar Documents

Publication Publication Date Title
US7400322B1 (en) Viewport-based desktop rendering engine
KR101623288B1 (en) Rendering system, rendering server, control method thereof, program, and recording medium
EP1462936A2 (en) Visual and scene graph interfaces
CN113470153B (en) Virtual scene rendering method and device and electronic equipment
US10217259B2 (en) Method of and apparatus for graphics processing
CN101477702B (en) Built-in real tri-dimension driving method for computer display card
CN109118556B (en) Method, system and storage medium for realizing animation transition effect of UI (user interface)
CN112686939B (en) Depth image rendering method, device, equipment and computer readable storage medium
CN101477701A (en) Built-in real tri-dimension rendering process oriented to AutoCAD and 3DS MAX
JP2016502724A (en) Method for forming shell mesh based on optimized polygons
US8698830B2 (en) Image processing apparatus and method for texture-mapping an image onto a computer graphics image
CN101477700A (en) Real tri-dimension display method oriented to Google Earth and Sketch Up
Moser et al. Interactive volume rendering on mobile devices
CN114130022A (en) Method, apparatus, device, medium, and program product for displaying screen of virtual scene
US20140161173A1 (en) System and method for controlling video encoding using content information
CN115228083A (en) Resource rendering method and device
CN114359458A (en) Image rendering method, device, equipment, storage medium and program product
CA2469050A1 (en) A method of rendering a graphics image
WO2023202254A1 (en) Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN101511034A (en) Truly three-dimensional stereo display method facing Skyline
CN113192173B (en) Image processing method and device of three-dimensional scene and electronic equipment
CN115970275A (en) Projection processing method and device for virtual object, storage medium and electronic equipment
US20240257444A1 (en) Method and apparatus for map interaction of virtual scene, electronic device, computer-readable storage medium, and computer program product
CN116450017B (en) Display method and device for display object, electronic equipment and medium
CN114049425B (en) Illumination simulation method, device, equipment and storage medium in image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40053923

Country of ref document: HK

GR01 Patent grant