CN116912379A - Scene picture rendering method and device, storage medium and electronic equipment

Scene picture rendering method and device, storage medium and electronic equipment

Info

Publication number: CN116912379A
Application number: CN202310874434.3A
Authority: CN (China)
Prior art keywords: rendering, target, original, map, data
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 谌国风, 彭章祥
Current Assignee: Netease Hangzhou Network Co Ltd
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure provides a scene picture rendering method and device, a storage medium, and an electronic device, and relates to the field of computer technology. The method includes the following steps: a first rendering process renders original picture rendering data into a first original shared map to obtain a first target shared map; a second rendering process reads the original picture rendering data in the first target shared map according to a first map handle of the first original shared map, and post-processes the original picture rendering data to obtain target picture rendering data; the second rendering process renders the target picture rendering data into a second original shared map corresponding to a second map handle according to the second map handle to obtain a second target shared map; and the first rendering process reads the target picture rendering data in the second target shared map and renders the target picture rendering data. The present disclosure enables cross-process data rendering.

Description

Scene picture rendering method and device, storage medium and electronic equipment
Technical Field
Embodiments of the disclosure relate to the field of computer technology, and in particular to a scene picture rendering method, a scene picture rendering device, a computer-readable storage medium, and an electronic device.
Background
In existing scene picture rendering schemes, a low-version multimedia and graphics programming interface can only perform basic computation and cannot post-process the scene picture, so the accuracy of the resulting scene picture is low.
It should be noted that the information disclosed in the Background section above is only for enhancing understanding of the background of the present disclosure, and therefore may include information that does not constitute prior art already known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a scene picture rendering method, a scene picture rendering device, a computer-readable storage medium, and an electronic device, so as to overcome, at least to some extent, the problem of low scene picture accuracy caused by the limitations and drawbacks of the related art.
According to one aspect of the present disclosure, there is provided a method of rendering a scene picture, including:
the first rendering process renders the original picture rendering data into a first original shared map to obtain a first target shared map;
the second rendering process reads the original picture rendering data in the first target shared map according to the first map handle of the first original shared map, and performs post-processing on the original picture rendering data to obtain target picture rendering data;
The second rendering process renders the target picture rendering data into a second original shared map corresponding to the second map handle according to the second map handle to obtain a second target shared map;
and the first rendering process reads the target picture rendering data in the second target shared map and renders the target picture rendering data.
According to an aspect of the present disclosure, there is provided a rendering apparatus of a scene picture, including:
the first data rendering module is used for rendering the original picture rendering data into a first original shared map through a first rendering process to obtain a first target shared map;
the post-processing module is used for reading the original picture rendering data in the first target shared map according to the first map handle of the first original shared map through a second rendering process, and carrying out post-processing on the original picture rendering data to obtain target picture rendering data;
the second data rendering module is used for rendering the target picture rendering data into a second original shared map corresponding to a second map handle through a second rendering process according to the second map handle to obtain a second target shared map;
and the scene picture rendering module is used for reading the target picture rendering data in the second target shared map through the first rendering process and rendering the target picture rendering data.
According to one aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of rendering a scene picture of any one of the above.
According to one aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of rendering a scene picture as claimed in any one of the preceding claims via execution of the executable instructions.
According to the scene picture rendering method provided by the embodiments of the disclosure, on one hand, original picture rendering data is rendered into a first original shared map by a first rendering process to obtain a first target shared map; a second rendering process reads the original picture rendering data in the first target shared map according to a first map handle of the first original shared map and post-processes it to obtain target picture rendering data; the second rendering process renders the target picture rendering data into a second original shared map corresponding to a second map handle according to the second map handle to obtain a second target shared map; and finally the first rendering process reads the target picture rendering data in the second target shared map and renders it. The original picture rendering data is thus post-processed across processes before the final rendering, which solves the prior-art problem that scene pictures cannot be post-processed and their accuracy is therefore low, and improves the accuracy of the resulting scene picture. On the other hand, because the post-processing is performed in the second rendering process, the accuracy of the resulting scene picture is improved without updating the first multimedia and graphics programming interface of the first rendering process, which simplifies the rendering flow and improves rendering efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 schematically illustrates a flowchart of a method of rendering a scene picture according to an example embodiment of the present disclosure.
Fig. 2 schematically illustrates a block diagram of a rendering system of a scene picture according to an example embodiment of the present disclosure.
FIG. 3 schematically illustrates a flow chart of a specific implementation of a cross-process access in accordance with an example embodiment of the present disclosure.
Fig. 4 schematically illustrates an example diagram of a target pixel and adjacent pixels corresponding thereto according to an example embodiment of the present disclosure.
Fig. 5 schematically illustrates an example diagram of a scene display resulting from Gaussian blur according to an example embodiment of the present disclosure.
Fig. 6 schematically illustrates a flowchart of a method of rendering a scene picture based on double-sided interaction according to an example embodiment of the present disclosure.
Fig. 7 schematically illustrates a block diagram of a rendering apparatus of a scene picture according to an example embodiment of the present disclosure.
Fig. 8 schematically illustrates an electronic device for implementing the above-described rendering method of a scene picture according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Games are typically developed on a game engine, which is responsible for displaying content on the screen; the underlying logic calls a graphics API (Application Programming Interface) to render primitives into memory and then draw them onto the screen. In practical application, different graphics APIs are supported on different software and hardware platforms; for example, on the PC (Personal Computer) side, the most commonly used graphics API is DirectX (or simply DX).
Further, for long-running PC projects, if the DirectX version used by the game is low, the pictures that can actually be rendered are often limited to a certain extent. For example, since DirectX 9 does not support advanced features such as Compute Shaders, most special-effect particles can only be implemented as CPU (Central Processing Unit) particles and cannot be realized with more efficient GPU (Graphics Processing Unit) particles; in addition, the number of model bones and the number of maps that can be used in a single render is small, which greatly constrains the achievable picture effects.
To solve the above problems, some technical solutions adopt one of the following approaches. One is to upgrade the DirectX version used by the game, or to replace the graphics API; however, this brings a huge workload and great technical risk, and it is difficult to guarantee the correctness of the modified logic. The other is to replace the game engine directly, i.e., to use a game engine that supports a higher version of DirectX; however, since long-running projects accumulate a large amount of historical art resources, migrating these resources to a new engine while keeping the effect consistent also brings a huge workload.
Therefore, for long-running PC projects, how to keep the development of art resources unconstrained while maintaining the currently used DirectX version is a problem to be solved.
Based on this, this exemplary embodiment first provides a method for rendering a scene picture; the method may run on a terminal device, a server cluster, or a cloud server, etc. Of course, those skilled in the art may also run the method of the present disclosure on other platforms as needed, which is not specifically limited in this exemplary embodiment. Specifically, referring to fig. 1, the method for rendering a scene picture may include the following steps:
Step S110. A first rendering process renders original picture rendering data into a first original shared map to obtain a first target shared map;
Step S120. A second rendering process reads the original picture rendering data in the first target shared map according to a first map handle of the first original shared map, and post-processes the original picture rendering data to obtain target picture rendering data;
Step S130. The second rendering process renders the target picture rendering data into a second original shared map corresponding to a second map handle according to the second map handle to obtain a second target shared map;
Step S140. The first rendering process reads the target picture rendering data in the second target shared map and renders the target picture rendering data.
In the above method for rendering scene pictures, on one hand, original picture rendering data is rendered into a first original shared map by a first rendering process to obtain a first target shared map; a second rendering process reads the original picture rendering data in the first target shared map according to a first map handle of the first original shared map and post-processes it to obtain target picture rendering data; the second rendering process renders the target picture rendering data into a second original shared map corresponding to a second map handle according to the second map handle to obtain a second target shared map; and finally the first rendering process reads the target picture rendering data in the second target shared map and renders it. The original picture rendering data is thus post-processed across processes before the final rendering, which solves the prior-art problem that scene pictures cannot be post-processed and their accuracy is therefore low, and improves the accuracy of the resulting scene picture. On the other hand, because the post-processing is performed in the second rendering process, the accuracy of the resulting scene picture is improved without updating the first multimedia and graphics programming interface of the first rendering process, which simplifies the rendering flow and improves rendering efficiency.
Hereinafter, a method of rendering a scene picture according to an exemplary embodiment of the present disclosure will be explained and described in detail with reference to the accompanying drawings.
First, the terms used in the exemplary embodiments of the present disclosure are explained.
DirectX, also known as Direct eXtension and abbreviated DX, is a multimedia and graphics programming interface. DirectX enables games and multimedia programs on the Windows platform to achieve higher execution efficiency, strengthens 3D graphics and sound effects, and provides designers with a common hardware driver standard, so that game developers do not have to write a different driver for each brand of hardware and the complexity of installing and configuring hardware is reduced for users.
Map: used to specify the characteristics of one or more surfaces of an object; these characteristics determine how the surfaces appear when shaded, such as color, brightness, self-illumination, and opacity.
Shared map handle: data created by the operating system when it creates a shared map; the handle uniquely refers to the shared map, and the shared map can be found and opened based on the handle.
Shared memory: a memory area that can be accessed by multiple processes; on the desktop side, such an area can be applied for and accessed through the Windows API (Application Programming Interface).
Rendering: the process of drawing logical primitives (e.g., triangles or rectangles) onto the screen through a graphics API; meanwhile, to achieve special rendering effects, corresponding mathematical calculations and transformations are generally also required.
Next, the technical principle of the exemplary embodiments of the present disclosure is explained. Specifically, the scene picture rendering method of the exemplary embodiments of the present disclosure is a multi-process hybrid rendering scheme: another process (using a higher version of DirectX) renders on behalf of the game, which both removes the limitation and avoids the workload and risk of upgrading DirectX. For example, call the original game process process A and the auxiliary rendering process process B. The DirectX API supports creating shared resources (which may include maps, vertex buffers, etc.) that can be accessed across processes, i.e., a shared map created by process A can be accessed and used by process B. For cross-process sharing, process B needs to obtain a handle to the shared map so that the map can be opened and used through the DirectX API. Based on this, a shared map can be created in process A, and the map handle can then be transferred to process B through an inter-process communication technique; in process B, the content to be displayed is written into the shared map; process A then reads the content of the shared map and performs subsequent rendering to realize the desired effect.
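Although the patent text contains no source code, the mechanism above can be made concrete with a short sketch. The following C++ fragment assumes process A uses Direct3D 9Ex (whose CreateTexture can return a cross-process shared handle) and process B uses Direct3D 11 (Direct3D 10 exposes the same OpenSharedResource call); the function names are illustrative, not from the patent.

```cpp
// Minimal sketch of the shared-map mechanism (assumed setup: Direct3D 9Ex in
// process A, Direct3D 11 in process B; function names are illustrative).
#include <d3d9.h>
#include <d3d11.h>

// Process A: create a render-target texture whose contents can be shared
// across processes. With Direct3D 9Ex, passing a non-null HANDLE* to
// CreateTexture yields a shared handle that another process can open.
IDirect3DTexture9* CreateSharedInputMap(IDirect3DDevice9Ex* device,
                                        UINT width, UINT height,
                                        HANDLE* outSharedHandle) {
    IDirect3DTexture9* texture = nullptr;
    *outSharedHandle = nullptr;
    HRESULT hr = device->CreateTexture(width, height, 1,
                                       D3DUSAGE_RENDERTARGET,
                                       D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                                       &texture, outSharedHandle);
    return SUCCEEDED(hr) ? texture : nullptr;
}

// Process B: open the same texture from the handle received over an
// inter-process communication channel (e.g., a shared memory area).
ID3D11Texture2D* OpenSharedInputMap(ID3D11Device* device,
                                    HANDLE sharedHandle) {
    ID3D11Texture2D* texture = nullptr;
    HRESULT hr = device->OpenSharedResource(sharedHandle,
                                            __uuidof(ID3D11Texture2D),
                                            reinterpret_cast<void**>(&texture));
    return SUCCEEDED(hr) ? texture : nullptr;
}
```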
Further, the rendering system of the scene picture described in the exemplary embodiments of the present disclosure is explained. Specifically, referring to fig. 2, the rendering system may include a first rendering process 210 and a second rendering process 220; the first rendering process and the second rendering process may be connected through a shared memory area 230, or through an inter-process communication mechanism such as sockets, pipes, or file reads and writes.
The first rendering process described herein may be the primary rendering process in which the game engine runs, and the second rendering process may be an auxiliary rendering process. A first multimedia and graphics programming interface 211 is provided in the first rendering process, and a second multimedia and graphics programming interface 221 is provided in the second rendering process, where the second interface version of the second multimedia and graphics programming interface is higher than the first interface version of the first multimedia and graphics programming interface; for example, the first interface version may be DirectX 9.0 and the second interface version may be DirectX 10.0. The second interface version is required to be higher than the first because the second rendering process must execute, based on its higher-version second multimedia and graphics programming interface, rendering effects (e.g., Compute Shaders) that cannot be realized in the first rendering process; meanwhile, since it is difficult to upgrade the first interface version of the first multimedia and graphics programming interface inside the game engine, the corresponding rendering effects must be implemented in a cross-process manner.
Further, an operating system program interface (i.e., an API of the Windows operating system) 212 is also provided in the first rendering process, and a shared memory program interface (i.e., a shared memory API of the Windows operating system) 222 is provided in the second rendering process. In practical application, the API of the Windows operating system is used to create the shared memory area, and the shared memory API of the Windows operating system may be used to obtain access rights to the shared memory area.
The specific implementation of cross-process access will be explained and illustrated below in connection with fig. 3. Specifically, referring to FIG. 3, a specific implementation of cross-process access may include the following steps:
Step S310. The first rendering process creates a shared memory area through a preset operating system program interface, and creates a first original shared map and a second original shared map through a preset first multimedia and graphics programming interface;
Step S320. The first rendering process configures a first map handle for the first original shared map and a second map handle for the second original shared map;
Step S330. The first rendering process places the first original shared map, the second original shared map, the first map handle, and the second map handle into the shared memory area;
Step S340. The second rendering process obtains the first map handle of the first original shared map and the second map handle of the second original shared map from the shared memory area based on a preset shared memory program interface.
Steps S310 to S340 are explained below. In practical application, first, in the first rendering process (assume this is process A), a batch of maps that can be shared across processes is created through the DirectX API (the preset first multimedia and graphics programming interface) and their handles are obtained; one part of the maps serves as input, representing map data passed from process A to process B (e.g., the first original shared map), and another part serves as output, representing map data passed from process B to process A (e.g., the second original shared map). Secondly, in the first rendering process, a shared memory area (shared memory for short) that can be accessed across processes is created through the API of the Windows operating system (the preset operating system program interface); meanwhile, in the second rendering process (assume this is process B), access rights to the memory area are obtained through the shared memory API of the Windows operating system (the preset shared memory program interface). Further, in process A, the handles of all shared maps (the first map handle of the first original shared map and the second map handle of the second original shared map) are stored in the shared memory area; meanwhile, in process B, all shared map handles are read, the shared maps are opened using the DirectX API, and access rights to them are obtained. At this point, cross-process access is fully set up; a minimal sketch follows.
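The sketch below assumes the Windows file-mapping API as the "preset operating system program interface"; the table layout and the object name "Local\\SceneRenderShared" are illustrative choices, not specified by the patent.

```cpp
// Sketch of steps S310-S340: a named shared memory area that carries the
// shared map handles from process A to process B (assumed layout and name).
#include <windows.h>

struct SharedHandleTable {
    HANDLE inputMapHandle;   // first map handle  (process A -> process B)
    HANDLE outputMapHandle;  // second map handle (process B -> process A)
};

static const wchar_t* kSharedMemName = L"Local\\SceneRenderShared";

// Process A: create the shared memory area and store both map handles in it.
SharedHandleTable* PublishHandles(HANDLE inputMap, HANDLE outputMap) {
    HANDLE mapping = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                        PAGE_READWRITE, 0,
                                        sizeof(SharedHandleTable),
                                        kSharedMemName);
    if (!mapping) return nullptr;
    auto* table = static_cast<SharedHandleTable*>(
        MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0,
                      sizeof(SharedHandleTable)));
    if (table) {
        table->inputMapHandle = inputMap;
        table->outputMapHandle = outputMap;
    }
    return table;
}

// Process B: obtain access to the same area and read the two handles.
SharedHandleTable* ReadHandles() {
    HANDLE mapping = OpenFileMappingW(FILE_MAP_ALL_ACCESS, FALSE,
                                      kSharedMemName);
    if (!mapping) return nullptr;
    return static_cast<SharedHandleTable*>(
        MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0,
                      sizeof(SharedHandleTable)));
}
```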
Hereinafter, a method of rendering the scene picture shown in fig. 1 will be further explained and described with reference to fig. 2 and 3. Specific:
In step S110, the first rendering process renders the original picture rendering data into the first original shared map to obtain a first target shared map.
Specifically, in practical application, the first rendering process renders the original picture rendering data into the first original shared map to obtain the first target shared map in the following manner. First, the first rendering process takes the first original shared map as a first output target through the first multimedia and graphics programming interface, and obtains the current scene position, in stereoscopic space, of the virtual lens associated with the first output target. Secondly, the first rendering process computes the scene picture to be rendered that the virtual lens can capture at the current scene position, and obtains the virtual objects included in the scene picture to be rendered; the scene picture to be rendered includes one or more virtual objects, where a virtual object includes at least one of a virtual character, a virtual animal, and a virtual scene. Then, the first rendering process obtains the original object model and original material data of each virtual object, and obtains the original picture rendering data from the original object model and the original material data. Finally, the first rendering process submits the original picture rendering data to the graphics processing unit through the first multimedia and graphics programming interface, and the graphics processing unit controls a preset shader program to store the original picture rendering data into the first original shared map, obtaining the first target shared map.
The specific generation process of the first target shared map is further explained below. In practical application, first, in the first rendering process, the first original shared map (T1) is taken as the first output target through the first multimedia and graphics programming interface. Then, the relevant data are computed to obtain the models visible within the lens range and data such as the materials they use. During this computation, the current scene position of the virtual lens associated with the first output target in three-dimensional (3D) space (i.e., the specified lens position in 3D space) can first be obtained; the scene picture to be rendered that the virtual lens can capture is then computed from the current scene position and the attitude angle of the virtual lens. Further, the virtual objects included in the scene picture to be rendered, together with their original object models and original material data, are obtained, and the original picture rendering data is derived from them. Finally, the related data (the original picture rendering data) is submitted to the graphics processing unit (GPU) through the DirectX API (the first multimedia and graphics programming interface), and data such as the color and normal direction of the content seen from the lens position is stored, through a shader program (Shader), into the pixel at each corresponding position of map T1, obtaining the first target shared map. A minimal sketch of this step follows.
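The following hedged sketch shows this step on the process A side, again assuming Direct3D 9Ex; the actual scene submission is engine-specific and is left as a placeholder comment.

```cpp
// Sketch of step S110 in process A: bind the first original shared map T1 as
// the render target and draw the scene seen by the virtual lens into it.
#include <d3d9.h>

bool RenderSceneIntoSharedMap(IDirect3DDevice9Ex* device,
                              IDirect3DTexture9* sharedMapT1) {
    IDirect3DSurface9* target = nullptr;
    if (FAILED(sharedMapT1->GetSurfaceLevel(0, &target))) return false;

    device->SetRenderTarget(0, target);            // T1 becomes the output
    device->Clear(0, nullptr, D3DCLEAR_TARGET,
                  D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    device->BeginScene();
    // DrawVisibleModels(device);  // engine-specific placeholder: submits the
                                   // original object models and material data
                                   // (the "original picture rendering data")
    device->EndScene();
    target->Release();
    return true;
}
```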
In step S120, the second rendering process reads the original picture rendering data in the first target shared map according to the first map handle of the first original shared map, and performs post-processing on the original picture rendering data to obtain target picture rendering data.
In this example embodiment, the second rendering process first reads the original picture rendering data in the first target shared map according to the first map handle of the first original shared map. Specifically, this can be realized as follows: first, the second rendering process obtains a first access right to the shared memory area according to the first map handle of the first original shared map, based on a preset second multimedia and graphics programming interface; secondly, the second rendering process opens the first target shared map based on the first access right, and reads the original picture rendering data from the first target shared map through the second multimedia and graphics programming interface. In practical application, after the first target shared map is obtained, the first rendering process notifies the second rendering process that the first target shared map has been rendered; upon receiving the notification, the second rendering process can read the original picture rendering data from the first target shared map.
Secondly, after the original picture rendering data has been read, it can be post-processed to obtain the target picture rendering data. Specifically, the target picture rendering data can be generated as follows: first, the second rendering process post-processes the original material data in the original picture rendering data based on the second multimedia and graphics programming interface to obtain target material data; secondly, the second rendering process renders the original object model in the original picture rendering data according to the target material data to obtain the target picture rendering data. That is, the original picture rendering data can be post-processed by the second rendering process to obtain the target picture rendering data. The post-processing described herein may include, but is not limited to, Gaussian blur, bilateral blur, foreground blur, edge detection, gamma correction, and the like, which this example does not specifically limit; the post-processing may also include processing the thickness and displacement deformation of the corresponding object model, and the like, which this example likewise does not specifically limit.
In an example embodiment, the post-processing of the original texture data in the original image rendering data to obtain the target texture data may be implemented as follows: firstly, world space vertex normal data and world position offset data of the original object model are obtained; secondly, determining a first model parameter, a second model parameter and a third model parameter according to the first target sharing map, world space vertex normal data and world position offset data; the first model parameters are used for representing color values of the original object model, the second model parameters are used for representing thickness scaling degree values of the original object model, and the third model parameters are used for representing displacement deformation degree values of the original object model; and finally, adjusting the original color value, the original shape thickness scaling degree and the original displacement variation degree in the original material data according to the first model parameter, the second model parameter and the third model parameter to obtain target material data. It should be noted that, in the actual application process, whether the thickness scaling and displacement deformation of the model need to be adjusted according to the world space vertex normal data and the world position offset data can be determined according to the actual needs; the world space vertex normal data and the world position offset data may be directly written into the second original shared map, or updated, which is not particularly limited in this example.
In an example embodiment, determining the first model parameter, the second model parameter, and the third model parameter from the first target shared map, the world space vertex normal data, and the world position offset data may be implemented as follows: first, Gaussian blur processing is performed on the first target shared map to obtain the first model parameter; secondly, in response to an assignment operation on the world space vertex normal data, the product of the world space vertex normal data and a first preset scaling factor is computed to obtain the second model parameter; finally, in response to an assignment operation on the world position offset data, the product result is displaced by the assigned world position offset data to obtain the third model parameter. That is, the inputs can be configured according to actual needs to achieve the desired effect, as in the sketch below.
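As an illustration only (the patent gives no formulas for the second and third model parameters beyond the product and displacement described above), a per-vertex application of these parameters might look like the following, where scaleFactor stands in for the first preset scaling factor.

```cpp
// Hypothetical sketch: thicken a vertex along its world-space normal (second
// model parameter) and then displace it by the world position offset (third
// model parameter). The exact application is an assumption for illustration.
struct Float3 { float x, y, z; };

Float3 AdjustVertex(Float3 position, Float3 worldNormal,
                    Float3 worldPositionOffset, float scaleFactor) {
    // Second model parameter: thickness scaling along the vertex normal.
    Float3 thickened{ position.x + worldNormal.x * scaleFactor,
                      position.y + worldNormal.y * scaleFactor,
                      position.z + worldNormal.z * scaleFactor };
    // Third model parameter: displacement of the scaled result.
    return Float3{ thickened.x + worldPositionOffset.x,
                   thickened.y + worldPositionOffset.y,
                   thickened.z + worldPositionOffset.z };
}
```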
In an exemplary embodiment, the Gaussian blur processing is performed on the first target shared map to obtain the first model parameter in the following manner: first, the current pixel points included in the first target shared map are obtained, and one of them is randomly selected as the target pixel point; secondly, the adjacent pixel points of the target pixel point are matched from the current pixel points according to the position of the target pixel point in the first target shared map; then, the first original color values of the adjacent pixel points are obtained, and the first target color value of the target pixel point is calculated from them; further, the first original color value of the target pixel point is replaced by the first target color value, the current pixel points are traversed, and the first target color values of all current pixel points are calculated; finally, the first model parameter is obtained based on the first target color values of the current pixel points. Specifically, referring to fig. 4, taking the target pixel point 401 as an example, its adjacent pixel points may include 402, 403, 404, 405, 406, 407, 408, and 409; here, 8 adjacent pixel points are taken as an example. In practical application, the number of adjacent pixel points can be selected according to actual needs, which this example does not specifically limit.
In an example embodiment, calculating the first target color value of the target pixel point from the first original color values may be implemented as follows: first, a weight value is configured for the first original color value of each adjacent pixel point according to the position difference between that adjacent pixel point and the target pixel point; secondly, the weight values and the first original color values are weighted and summed to obtain the first target color value. Specifically, since a map consists of a two-dimensional pixel lattice, assume the target pixel point is at coordinates (x, y) in the first target shared map with color value C(x, y); a weighted sum over its color value and the color values of the 8 adjacent positions gives the final color value C′(x, y), calculated as shown in formula (1) below:
C′(x, y) = p1×C1 + p2×C2 + p3×C3 + p4×C4 + p5×C + p6×C5 + p7×C6 + p8×C7 + p9×C8    (1)
where p1, p2, p3, p4, p6, p7, p8, and p9 are the weight values of the adjacent pixel points, p5 is the weight value of the target pixel point, C is the color value of the target pixel point, and C1 to C8 are the color values of the adjacent pixel points. The positions of C1 to C8 relative to C are: C1→C(x−1, y−1); C2→C(x, y−1); C3→C(x+1, y−1); C4→C(x−1, y); C5→C(x+1, y); C6→C(x−1, y+1); C7→C(x, y+1); C8→C(x+1, y+1).
Furthermore, in practical application, the specific values of p1, p2, p3, p4, p6, p7, p8, and p9 can be determined according to the distance between each adjacent pixel point and the target pixel point. For example, in this example embodiment, p1 = 0.0453, p2 = 0.0566, p3 = 0.0453, p4 = 0.0566, p6 = 0.0566, p7 = 0.0453, p8 = 0.0566, and p9 = 0.0453; the value of p5 is then 1 − p1 − p2 − p3 − p4 − p6 − p7 − p8 − p9 = 0.5924.
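A CPU-side sketch of formula (1) with the example weights above, operating on a single-channel image for clarity; in practice this would run in a shader on the GPU.

```cpp
// 3x3 Gaussian blur per formula (1): each pixel's new color is the weighted
// sum of itself and its eight neighbors (0.0453 for diagonal neighbors,
// 0.0566 for edge neighbors, and the remainder, 0.5924, for the center).
#include <vector>

void GaussianBlur3x3(const std::vector<float>& src, std::vector<float>& dst,
                     int width, int height) {
    // Weights indexed by (dy + 1) * 3 + (dx + 1); center is w[4].
    const float w[9] = { 0.0453f, 0.0566f, 0.0453f,
                         0.0566f, 0.5924f, 0.0566f,
                         0.0453f, 0.0566f, 0.0453f };
    dst.assign(src.size(), 0.0f);
    for (int y = 1; y < height - 1; ++y) {       // border pixels skipped
        for (int x = 1; x < width - 1; ++x) {
            float sum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += w[(dy + 1) * 3 + (dx + 1)] *
                           src[(y + dy) * width + (x + dx)];
            dst[y * width + x] = sum;            // C'(x, y)
        }
    }
}
```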
At this point, the post-processing procedure has been fully described. The scene picture obtained by Gaussian blur is shown in fig. 5.
In step S130, the second rendering process renders the target frame rendering data to a second original shared map corresponding to the second map handle according to the second map handle, so as to obtain a second target shared map.
Specifically, in practical application, the second rendering process renders the target picture rendering data into the second original shared map corresponding to the second map handle according to the second map handle to obtain the second target shared map in the following manner: first, the second rendering process obtains a second access right to the shared memory area according to the second map handle, based on the second multimedia and graphics programming interface; secondly, the second rendering process opens the second original shared map based on the second access right and submits the target picture rendering data to the graphics processing unit through the second multimedia and graphics programming interface, and the graphics processing unit controls a preset shader program to store the target picture rendering data into the second original shared map, obtaining the second target shared map. In practical application, after the second rendering process completes the post-processing, the target picture rendering data can be written into the second original shared map to obtain the second target shared map, and the first rendering process is then notified through the shared memory program interface (i.e., the shared memory API of the Windows operating system) that this rendering pass is finished; a minimal sketch of such a notification follows.
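The patent does not specify the notification mechanism beyond "through the shared memory program interface"; one simple possibility is a pair of flags kept in the shared memory area, as in this sketch (the layout is an assumption).

```cpp
// Assumed extension of the shared memory header with two ready flags:
// process A raises inputReady after rendering T1, and process B raises
// outputReady after writing T2.
#include <windows.h>

struct SharedSyncFlags {
    volatile LONG inputReady;   // set by process A when T1 is complete
    volatile LONG outputReady;  // set by process B when T2 is complete
};

// Process B, end of step S130: publish the result and notify process A.
void NotifyOutputReady(SharedSyncFlags* sync) {
    InterlockedExchange(&sync->outputReady, 1);
}

// Process A: poll until the post-processed map is ready, consuming the flag.
bool WaitForOutput(SharedSyncFlags* sync, int maxSpins) {
    for (int i = 0; i < maxSpins; ++i) {
        if (InterlockedCompareExchange(&sync->outputReady, 0, 1) == 1)
            return true;
        Sleep(0);   // yield; a real implementation might block on an event
    }
    return false;
}
```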
In step S140, the first rendering process reads the target frame rendering data in the second target shared map, and renders the target frame rendering data.
Specifically, in practical application, the first rendering process reads the target picture rendering data in the second target shared map and renders it in the following manner: the first rendering process reads the target picture rendering data from the second target shared map based on the first multimedia and graphics programming interface, and calls the first multimedia and graphics programming interface to draw the target picture rendering data onto the display interface. Note that after the first rendering process reads the target picture rendering data, additional rendering-related calculations may be performed first if needed, and the first multimedia and graphics programming interface is finally called to draw the target picture rendering data onto the display interface, as in the sketch below.
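A sketch of this final step in process A, assuming Direct3D 9Ex; StretchRect is one straightforward way to copy the second target shared map to the back buffer, though an engine could equally sample it in a full-screen pass to perform the additional calculations mentioned above.

```cpp
// Sketch of step S140 in process A: copy the second target shared map T2 to
// the back buffer and present the frame.
#include <d3d9.h>

bool PresentSharedOutput(IDirect3DDevice9Ex* device,
                         IDirect3DTexture9* sharedMapT2) {
    IDirect3DSurface9* source = nullptr;
    IDirect3DSurface9* backBuffer = nullptr;
    if (FAILED(sharedMapT2->GetSurfaceLevel(0, &source))) return false;
    if (FAILED(device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO,
                                     &backBuffer))) {
        source->Release();
        return false;
    }
    HRESULT hr = device->StretchRect(source, nullptr, backBuffer, nullptr,
                                     D3DTEXF_LINEAR);
    backBuffer->Release();
    source->Release();
    return SUCCEEDED(hr) &&
           SUCCEEDED(device->Present(nullptr, nullptr, nullptr, nullptr));
}
```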
Hereinafter, a method of rendering a scene picture according to an exemplary embodiment of the present disclosure will be further explained and illustrated with reference to fig. 6. Specifically, referring to fig. 6, the method for rendering the scene may include the following steps:
Step S601. In process A, create a batch of maps that can be shared across processes through the DirectX API, and obtain their handles; one part of the maps serves as input, representing map data passed from process A to process B, and another part serves as output, representing map data passed from process B to process A;
Step S602. In process A, create a shared memory area that can be accessed across processes through the API of the Windows operating system;
Step S603. In process B, obtain access rights to the memory area through the shared memory API of the Windows operating system;
Step S604. In process A, store the handles of all shared maps into the shared memory;
Step S605. In process B, read all shared map handles, open the shared maps using the DirectX API, and obtain access rights to them;
Step S606. In process A, perform rendering-related calculations according to the effect to be achieved, and write the result into the shared map serving as input;
Step S607. In process B, perform rendering-related calculations on the basis of the input shared map content according to the effect to be achieved, and write the result into the shared map serving as output;
Step S608. In process A, read the content of the output shared map, perform additional rendering-related calculations according to the effect to be achieved, and call the DirectX API to draw the result to the screen.
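Tying steps S606 to S608 together, one frame on the process A side might look like the following, reusing the illustrative helpers sketched earlier (RenderSceneIntoSharedMap, WaitForOutput, PresentSharedOutput); all names are assumptions, and error handling is omitted.

```cpp
// One frame of the cross-process pipeline from process A's point of view
// (illustrative only; builds on the earlier sketches in this description).
void RunOneFrameProcessA(IDirect3DDevice9Ex* device,
                         IDirect3DTexture9* inputMapT1,
                         IDirect3DTexture9* outputMapT2,
                         SharedSyncFlags* sync) {
    // S606: render the scene into the input shared map and notify process B.
    RenderSceneIntoSharedMap(device, inputMapT1);
    InterlockedExchange(&sync->inputReady, 1);
    // S607 runs in process B: it post-processes T1 and writes T2.
    // S608: once T2 is ready, draw it to the screen.
    if (WaitForOutput(sync, /*maxSpins=*/100000)) {
        PresentSharedOutput(device, outputMapT2);
    }
}
```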
The above method for rendering scene pictures avoids the problems of the conventional approach, in which the code logic of the game engine used by process A must be modified, which involves a heavy workload and is prone to logic bugs. Meanwhile, in the method described in the exemplary embodiments of the present disclosure, the game engine used by process B can be chosen freely, so that the DirectX version used by process B is newer than that used by process A; some higher-level rendering effects can thus be implemented for process A by process B without any modification to process A, which is fast and less error-prone.
The example embodiment of the disclosure also provides a rendering device of the scene picture. Specifically, referring to fig. 7, the scene rendering apparatus may include a first data rendering module 710, a post-processing module 720, a second data rendering module 730, and a scene rendering module 740. Wherein:
the first data rendering module 710 may be configured to render, by a first rendering process, the original image rendering data to a first original shared map, to obtain a first target shared map;
The post-processing module 720 may be configured to read, by a second rendering process, original image rendering data in the first target shared map according to a first map handle of the first original shared map, and perform post-processing on the original image rendering data to obtain target image rendering data;
the second data rendering module 730 may be configured to render, by using a second rendering process, the target frame rendering data to a second original shared map corresponding to a second map handle according to the second map handle, so as to obtain a second target shared map;
the scene picture rendering module 740 may be configured to read, by a first rendering process, the target picture rendering data in the second target shared map, and render the target picture rendering data.
In an exemplary embodiment of the present disclosure, the rendering apparatus of a scene picture further includes:
the shared memory area creating module can be used for creating a shared memory area through a first rendering process through a preset operating system program interface, and creating a first original shared map and a second original shared map through a preset first multimedia and graphic programming interface;
The map handle configuration module may be configured to configure a first map handle for the first original shared map and a second map handle for a second original shared map through the first rendering process;
the shared map placement module may be configured to place a first original shared map, a second original shared map, a first map handle, and a second map handle into the shared memory region through the first rendering process;
the map handle obtaining module may be configured to obtain, by using the second rendering process, a first map handle of a first original shared map and a second map handle of a second original shared map from the shared memory area based on a preset shared memory program interface.
In an exemplary embodiment of the present disclosure, a first rendering process renders original picture rendering data into a first original shared map to obtain a first target shared map, including: the first rendering process takes the first original shared map as a first output target through a first multimedia and graphics programming interface, and obtains the current scene position of a virtual lens associated with the first output target in stereoscopic space; the first rendering process computes the scene picture to be rendered that the virtual lens can capture at the current scene position, and obtains the virtual objects included in the scene picture to be rendered; the scene picture to be rendered includes one or more virtual objects, where a virtual object includes at least one of a virtual character, a virtual animal, and a virtual scene; the first rendering process obtains the original object model and original material data of the virtual object, and obtains the original picture rendering data from the original object model and the original material data; the first rendering process submits the original picture rendering data to a graphics processing unit through the first multimedia and graphics programming interface, and the graphics processing unit controls a preset shader program to store the original picture rendering data into the first original shared map, obtaining the first target shared map.
In an exemplary embodiment of the present disclosure, the second rendering process reads the original picture rendering data in the first target shared map according to the first map handle of the first original shared map, including: the second rendering process obtains a first access right to the shared memory area based on a preset second multimedia and graphic programming interface according to a first map handle of the first original shared map; and the second rendering process opens a first target sharing map based on the first access right, and reads the original picture rendering data from the first target sharing map through a second multimedia and graphic programming interface.
In an exemplary embodiment of the present disclosure, the second rendering process renders the target picture rendering data into a second original shared map corresponding to a second map handle according to the second map handle to obtain a second target shared map, including: the second rendering process obtains a second access right to the shared memory area according to the second map handle, based on a second multimedia and graphics programming interface; and the second rendering process opens the second original shared map based on the second access right, submits the target picture rendering data to a graphics processing unit through the second multimedia and graphics programming interface, and controls a preset shader program to store the target picture rendering data into the second original shared map, obtaining the second target shared map.
In an exemplary embodiment of the present disclosure, post-processing the original picture rendering data to obtain target picture rendering data includes: the second rendering process performs post-processing on the original material data in the original picture rendering data based on a second multimedia and graphic programming interface to obtain target material data; and the second rendering process renders the original object model in the original picture rendering data according to the target material data to obtain target picture rendering data.
In an exemplary embodiment of the present disclosure, post-processing original texture data in the original image rendering data to obtain target texture data includes: acquiring world space vertex normal data and world position offset data of the original object model; determining a first model parameter, a second model parameter and a third model parameter according to the first target sharing map, world space vertex normal data and world position offset data; the first model parameters are used for representing color values of the original object model, the second model parameters are used for representing thickness scaling degree values of the original object model, and the third model parameters are used for representing displacement deformation degree values of the original object model; and adjusting the original color value, the original shape thickness scaling degree and the original displacement variation degree in the original material data according to the first model parameter, the second model parameter and the third model parameter to obtain target material data.
In one exemplary embodiment of the present disclosure, determining a first model parameter, a second model parameter, and a third model parameter from the first target shared map, world space vertex normal data, and world position offset data includes: carrying out Gaussian blur processing on the first target sharing map to obtain a first model parameter; calculating the product between the world space vertex normal data and a first preset scaling factor in response to the assignment operation for the world space vertex normal data to obtain a second model parameter; and responding to the assignment operation of the world position offset data, and carrying out displacement processing on the product operation result through the world position offset data after the assignment operation to obtain a third model parameter.
In an exemplary embodiment of the present disclosure, performing gaussian blur processing on the first target-sharing map to obtain a first model parameter, including: acquiring a current pixel point included in the first target sharing map, and randomly selecting one pixel point from the current pixel point as a target pixel point; according to the target pixel position of the target pixel in the first target sharing map, matching adjacent pixel points of the target pixel from the current pixel; acquiring a first original color value of the adjacent pixel points, and calculating a first target color value of the target pixel point according to the first original color value; replacing the first original color value of the target pixel point based on the first target color value, traversing the current pixel point, and calculating the first target color values of all the current pixel points; and obtaining a first model parameter based on the first target color value of the current pixel point.
In an exemplary embodiment of the present disclosure, calculating a first target color value of the target pixel point according to the first original color value includes: according to the position difference value between the adjacent pixel point and the target pixel point, configuring a weight value for a first original color value corresponding to the adjacent pixel point; and carrying out weighted summation on the weight value and the first original color value to obtain the first target color value.
In an exemplary embodiment of the present disclosure, a first rendering process reads the target picture rendering data in the second target shared map and renders the target picture rendering data, including: the first rendering process reads the target picture rendering data from the second target shared map based on the first multimedia and graphics programming interface, and calls the first multimedia and graphics programming interface to draw the target picture rendering data onto the display interface.
In an exemplary embodiment of the present disclosure, the first rendering process is the primary rendering process in which the game engine runs, and the second rendering process is an auxiliary rendering process; a first multimedia and graphics programming interface is provided in the first rendering process, and a second multimedia and graphics programming interface is provided in the second rendering process; the second interface version of the second multimedia and graphics programming interface is higher than the first interface version of the first multimedia and graphics programming interface.
The specific details of each module in the above-mentioned rendering device of the scene image are already described in detail in the corresponding rendering method of the scene image, so that they will not be described here again.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order or that all illustrated steps be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," a "module," or a "system."
An electronic device 800 according to such an embodiment of the present disclosure is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 8, the electronic device 800 is embodied in the form of a general purpose computing device. Components of the electronic device 800 may include, but are not limited to: at least one processing unit 810, at least one storage unit 820, a bus 830 connecting the different system components (including the storage unit 820 and the processing unit 810), and a display unit 840.
Wherein the storage unit stores program code that is executable by the processing unit 810, such that the processing unit 810 performs the steps according to various exemplary embodiments of the present disclosure described in the above section of the present specification. For example, the processing unit 810 may perform step S110 as shown in fig. 1: the first rendering process renders the original picture rendering data into a first original shared map to obtain a first target shared map; step S120: the second rendering process reads the original picture rendering data in the first target shared map according to the first map handle of the first original shared map, and performs post-processing on the original picture rendering data to obtain target picture rendering data; step S130: the second rendering process renders the target picture rendering data into a second original shared map corresponding to the second map handle according to the second map handle to obtain a second target shared map; and step S140: the first rendering process reads the target picture rendering data in the second target shared map and renders the target picture rendering data.
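The four steps can be summarized in a short, non-normative C++ sketch. Both "processes" are faked here as in-process objects purely to show the direction of data flow; every type and function name below is a placeholder, not the patent's API.

    #include <vector>

    struct SharedMap { unsigned long long handle = 0; std::vector<float> pixels; };
    using FrameData = std::vector<float>;

    struct FirstProcess {
        void RenderOriginal(SharedMap& m) { m.pixels.assign(4, 0.5f); } // stub render
        void Draw(const SharedMap& m) { (void)m; /* submit to the display */ }
    };

    struct SecondProcess {
        FrameData Read(const SharedMap& m) { return m.pixels; }
        FrameData PostProcess(FrameData d) { for (float& v : d) v *= 0.9f; return d; }
        void Write(SharedMap& m, const FrameData& d) { m.pixels = d; }
    };

    // One frame of the S110-S140 round trip between the two processes.
    void Frame(FirstProcess& p1, SecondProcess& p2,
               SharedMap& firstMap, SharedMap& secondMap) {
        p1.RenderOriginal(firstMap);                       // S110
        FrameData t = p2.PostProcess(p2.Read(firstMap));   // S120
        p2.Write(secondMap, t);                            // S130
        p1.Draw(secondMap);                                // S140
    }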
The storage unit 820 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 8201 and/or cache memory 8202, and may further include Read Only Memory (ROM) 8203.
Storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may be one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 900 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 800, and/or any device (e.g., router, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850. Also, electronic device 800 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 880. As shown, network adapter 880 communicates with other modules of electronic device 800 over bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 800, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions to cause a computing device (such as a personal computer, a server, a terminal device, or a network device) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible implementations, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device.
A program product for implementing the above-described method according to an embodiment of the present disclosure may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

Claims (15)

1. A method of rendering a scene picture, comprising:
the first rendering process renders the original picture rendering data into a first original shared map to obtain a first target shared map;
the second rendering process reads the original picture rendering data in the first target shared map according to the first map handle of the first original shared map, and performs post-processing on the original picture rendering data to obtain target picture rendering data;
the second rendering process renders the target picture rendering data into a second original shared map corresponding to the second map handle according to the second map handle to obtain a second target shared map;
and the first rendering process reads the target picture rendering data in the second target shared map and renders the target picture rendering data.
2. The method for rendering a scene picture according to claim 1, wherein the method further comprises:
the first rendering process creates a shared memory area through a preset operating system program interface, and creates a first original shared map and a second original shared map through a preset first multimedia and graphics programming interface;
the first rendering process configures a first map handle for the first original shared map and a second map handle for a second original shared map;
the first rendering process places a first original shared map, a second original shared map, a first map handle, and a second map handle into the shared memory region;
the second rendering process obtains a first map handle of the first original shared map and a second map handle of the second original shared map from the shared memory area based on a preset shared memory program interface.
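On Windows with Direct3D 11, claim 2 could plausibly be realized as below. This is only one mapping of the claim onto concrete APIs, not the claimed implementation: CreateFileMappingW/MapViewOfFile stand in for the preset operating system program interface, D3D11_RESOURCE_MISC_SHARED yields shareable textures, only the two map handles are written into the shared memory region (the maps themselves live in GPU memory), and the mapping name "Local\\SceneMapHandles" is invented.

    #include <windows.h>
    #include <d3d11.h>
    #include <dxgi.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    struct HandleBlock { HANDLE firstMapHandle; HANDLE secondMapHandle; };

    // Create one shareable texture and return its shared ("map") handle.
    HANDLE CreateSharedMap(ID3D11Device* device, UINT w, UINT h,
                           ComPtr<ID3D11Texture2D>& tex) {
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = w; desc.Height = h;
        desc.MipLevels = 1; desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
        desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED; // make the map shareable
        device->CreateTexture2D(&desc, nullptr, &tex);

        ComPtr<IDXGIResource> dxgiRes;
        tex.As(&dxgiRes);
        HANDLE shared = nullptr;
        dxgiRes->GetSharedHandle(&shared);
        return shared;
    }

    // Place both map handles into a named shared memory region so that the
    // second rendering process can retrieve them. Null checks omitted; the
    // mapping handle is deliberately left open for the region's lifetime.
    void PublishHandles(HANDLE first, HANDLE second) {
        HANDLE mapping = CreateFileMappingW(
            INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
            0, static_cast<DWORD>(sizeof(HandleBlock)), L"Local\\SceneMapHandles");
        auto* block = static_cast<HandleBlock*>(
            MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, sizeof(HandleBlock)));
        block->firstMapHandle = first;
        block->secondMapHandle = second;
        UnmapViewOfFile(block);
    }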
3. The method for rendering a scene picture according to claim 2, wherein the first rendering process rendering the original picture rendering data into a first original shared map to obtain a first target shared map comprises:
the first rendering process takes the first original shared map as a first output target through the first multimedia and graphics programming interface, and obtains the current scene position, in a stereoscopic space, of a virtual lens associated with the first output target;
the first rendering process calculates a scene picture to be rendered that can be captured by the virtual lens at the current scene position, and acquires the virtual objects included in the scene picture to be rendered; the scene picture to be rendered comprises one or more virtual objects, wherein a virtual object comprises at least one of a virtual character, a virtual animal, and a virtual scene;
the first rendering process obtains an original object model and original material data of the virtual object, and obtains the original picture rendering data according to the original object model and the original material data;
the first rendering process submits the original picture rendering data to a graphics processing unit through the first multimedia and graphics programming interface, and the graphics processing unit controls a preset shader program to store the original picture rendering data into the first original shared map to obtain the first target shared map.
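The last step of claim 3, taking the shared map as the output target and letting a shader program fill it, could be sketched in Direct3D 11 terms as follows (again an assumed API; shaders, geometry, and pipeline state are presumed bound elsewhere).

    #include <d3d11.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Bind the first original shared map as the render target and draw, so
    // the bound shader program stores the picture rendering data into it,
    // producing the first target shared map. The texture must have been
    // created with D3D11_BIND_RENDER_TARGET.
    void RenderIntoSharedMap(ID3D11Device* device, ID3D11DeviceContext* ctx,
                             ID3D11Texture2D* firstSharedMap, UINT vertexCount) {
        ComPtr<ID3D11RenderTargetView> rtv;
        device->CreateRenderTargetView(firstSharedMap, nullptr, &rtv);

        ctx->OMSetRenderTargets(1, rtv.GetAddressOf(), nullptr);
        const FLOAT clear[4] = {0.0f, 0.0f, 0.0f, 1.0f};
        ctx->ClearRenderTargetView(rtv.Get(), clear);

        ctx->Draw(vertexCount, 0); // submit the picture rendering data to the GPU
    }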
4. The method for rendering a scene picture according to claim 2, wherein the second rendering process reading the original picture rendering data in the first target shared map according to the first map handle of the first original shared map comprises:
the second rendering process obtains a first access right to the shared memory area based on a preset second multimedia and graphics programming interface according to the first map handle of the first original shared map; and the second rendering process opens the first target shared map based on the first access right, and reads the original picture rendering data from the first target shared map through the second multimedia and graphics programming interface.
5. The method for rendering a scene picture according to claim 2, wherein the second rendering process rendering the target picture rendering data into a second original shared map corresponding to the second map handle according to the second map handle to obtain a second target shared map comprises:
the second rendering process obtains a second access right to the shared memory area based on the second multimedia and graphics programming interface according to the second map handle;
and the second rendering process opens the second original shared map based on the second access right, submits the target picture rendering data to a graphics processing unit through the second multimedia and graphics programming interface, and controls a preset shader program to store the target picture rendering data into the second original shared map to obtain the second target shared map.
6. The method for rendering a scene picture according to claim 1, wherein post-processing the original picture rendering data to obtain target picture rendering data comprises:
the second rendering process performs post-processing on the original material data in the original picture rendering data based on the second multimedia and graphics programming interface to obtain target material data;
and the second rendering process renders the original object model in the original picture rendering data according to the target material data to obtain target picture rendering data.
7. The method for rendering a scene picture according to claim 6, wherein post-processing the original material data in the original picture rendering data to obtain target material data comprises:
acquiring world space vertex normal data and world position offset data of the original object model;
determining a first model parameter, a second model parameter and a third model parameter according to the first target sharing map, world space vertex normal data and world position offset data; the first model parameters are used for representing color values of the original object model, the second model parameters are used for representing thickness scaling degree values of the original object model, and the third model parameters are used for representing displacement deformation degree values of the original object model;
and adjusting the original color value, the original shape thickness scaling degree and the original displacement variation degree in the original material data according to the first model parameter, the second model parameter and the third model parameter to obtain target material data.
8. The method for rendering a scene picture according to claim 7, wherein determining a first model parameter, a second model parameter, and a third model parameter according to the first target shared map, the world space vertex normal data, and the world position offset data comprises:
performing Gaussian blur processing on the first target shared map to obtain the first model parameter;
calculating, in response to the assignment operation for the world space vertex normal data, the product of the world space vertex normal data and a first preset scaling factor to obtain the second model parameter;
and performing, in response to the assignment operation for the world position offset data, displacement processing on the product result using the assigned world position offset data to obtain the third model parameter.
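The arithmetic behind the second and third model parameters of claim 8 is simple enough to illustrate directly; the exact form of the "displacement processing" is not spelled out in the claim, so the additive offset below is an assumption of this sketch.

    #include <array>

    using Vec3 = std::array<float, 3>;

    // Second model parameter: the world space vertex normal scaled by a
    // first preset scaling factor (controls the thickness scaling degree).
    Vec3 SecondModelParam(const Vec3& worldNormal, float presetScale) {
        return {worldNormal[0] * presetScale,
                worldNormal[1] * presetScale,
                worldNormal[2] * presetScale};
    }

    // Third model parameter: the product result displaced by the assigned
    // world position offset (controls the displacement deformation degree).
    Vec3 ThirdModelParam(const Vec3& scaledNormal, const Vec3& worldOffset) {
        return {scaledNormal[0] + worldOffset[0],
                scaledNormal[1] + worldOffset[1],
                scaledNormal[2] + worldOffset[2]};
    }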
9. The method for rendering a scene picture according to claim 8, wherein performing Gaussian blur processing on the first target shared map to obtain the first model parameter comprises:
acquiring the current pixel points included in the first target shared map, and randomly selecting one pixel point from the current pixel points as a target pixel point;
matching adjacent pixel points of the target pixel point from the current pixel points according to the target pixel position of the target pixel point in the first target shared map;
acquiring the first original color values of the adjacent pixel points, and calculating a first target color value of the target pixel point according to the first original color values;
replacing the first original color value of the target pixel point with the first target color value, traversing the current pixel points, and calculating the first target color values of all the current pixel points;
and obtaining the first model parameter based on the first target color values of the current pixel points.
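Extending the per-pixel weighting shown earlier to the whole map, the traversal of claim 9 resembles an ordinary Gaussian blur. The sketch below walks the pixels in scan order rather than picking them randomly, uses a 3x3 neighborhood, and writes to a separate buffer so that replaced colors do not feed into later sums; all three choices are assumptions of this illustration.

    #include <cmath>
    #include <vector>

    // Blur a single-channel map: each pixel becomes the weighted sum of its
    // neighbors' original colors, weighted by a Gaussian of the positional
    // difference (sigma and the 3x3 kernel are illustrative choices).
    std::vector<float> GaussianBlur(const std::vector<float>& src,
                                    int width, int height, float sigma = 1.0f) {
        std::vector<float> dst(src.size());
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                float sum = 0.0f, wsum = 0.0f;
                for (int dy = -1; dy <= 1; ++dy) {
                    for (int dx = -1; dx <= 1; ++dx) {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                            continue; // skip neighbors outside the map
                        float d2 = float(dx * dx + dy * dy); // squared position difference
                        float w = std::exp(-d2 / (2.0f * sigma * sigma));
                        sum += w * src[ny * width + nx];
                        wsum += w;
                    }
                }
                dst[y * width + x] = sum / wsum; // replace the target pixel's color
            }
        }
        return dst;
    }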
10. The method for rendering a scene picture according to claim 9, wherein calculating a first target color value of the target pixel point according to the first original color value comprises:
according to the position difference value between the adjacent pixel point and the target pixel point, configuring a weight value for a first original color value corresponding to the adjacent pixel point;
and carrying out weighted summation on the weight value and the first original color value to obtain the first target color value.
11. The method for rendering a scene picture according to claim 1, wherein the first rendering process reading the target picture rendering data in the second target shared map and rendering the target picture rendering data comprises:
the first rendering process reads the target picture rendering data from the second target shared map based on the first multimedia and graphics programming interface, and calls the first multimedia and graphics programming interface to draw the target picture rendering data onto the display interface.
12. The method for rendering a scene picture according to claim 1, wherein the first rendering process is the main rendering process in which the game engine runs, and the second rendering process is an auxiliary rendering process;
the first rendering process is provided with a first multimedia and graphics programming interface, and the second rendering process is provided with a second multimedia and graphics programming interface;
and the second interface version of the second multimedia and graphics programming interface is higher than the first interface version of the first multimedia and graphics programming interface.
13. A scene picture rendering apparatus, comprising:
the first data rendering module is used for rendering the original picture rendering data into a first original shared map through a first rendering process to obtain a first target shared map;
the post-processing module is used for reading the original picture rendering data in the first target shared map according to the first map handle of the first original shared map through a second rendering process, and carrying out post-processing on the original picture rendering data to obtain target picture rendering data;
the second data rendering module is used for rendering the target picture rendering data into a second original shared map corresponding to a second map handle through a second rendering process according to the second map handle to obtain a second target shared map;
and the scene picture rendering module is used for reading the target picture rendering data in the second target shared map through the first rendering process and rendering the target picture rendering data.
14. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the method of rendering a scene picture according to any of claims 1-12.
15. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of rendering a scene picture according to any of claims 1-12 via execution of the executable instructions.
CN202310874434.3A 2023-07-14 2023-07-14 Scene picture rendering method and device, storage medium and electronic equipment Pending CN116912379A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310874434.3A CN116912379A (en) 2023-07-14 2023-07-14 Scene picture rendering method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310874434.3A CN116912379A (en) 2023-07-14 2023-07-14 Scene picture rendering method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116912379A true CN116912379A (en) 2023-10-20

Family

ID=88367694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310874434.3A Pending CN116912379A (en) 2023-07-14 2023-07-14 Scene picture rendering method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116912379A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination