CN115496845A - Image rendering method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115496845A
Authority
CN
China
Prior art keywords
rendering
image
target
primitive
attribute information
Prior art date
Legal status
Pending
Application number
CN202211067613.8A
Other languages
Chinese (zh)
Inventor
饶超
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202211067613.8A
Publication of CN115496845A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation

Abstract

The present disclosure relates to an image rendering method and apparatus, an electronic device, and a storage medium, in the field of computer graphics. The method includes the following steps: a terminal acquires data to be rendered, where the data to be rendered includes a primitive list; the terminal performs rasterization rendering on the data to be rendered to obtain a first rendered image; the terminal sends the data to be rendered to a server; the server determines target data from the data to be rendered, where the target data includes at least one target primitive in the primitive list and the target primitive satisfies a preset category condition; the server performs real-time ray tracing on the target data to obtain a second rendered image and sends it to the terminal; and the terminal synthesizes the first rendered image and the second rendered image to obtain a target rendered image. With the technical solution provided by the embodiments of the present disclosure, high-quality rendered images can be provided efficiently on a mobile terminal, balancing rendering efficiency and rendering quality.

Description

Image rendering method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer graphics, and in particular, to an image rendering method and apparatus, an electronic device, and a storage medium.
Background
With the continual emergence of image applications such as games, short videos, virtual reality, augmented reality, and the metaverse, three-dimensional image rendering plays an increasingly important role as one of the underlying capabilities of these applications. In general, rendering quality and rendering efficiency for three-dimensional images are in tension: higher rendering quality means lower efficiency, and vice versa.
In the related art, on a mobile terminal with limited performance, rendering quality is often sacrificed to some extent in order to achieve high efficiency, yet users' requirements for image quality keep rising. Therefore, as mobile terminals and image applications become increasingly widespread, efficiently providing high-quality rendered images has become an urgent problem to be solved.
Disclosure of Invention
The present disclosure provides an image rendering method, apparatus, electronic device, and storage medium to at least solve the problem in the related art that it is difficult to efficiently provide a high-quality rendered image on a mobile terminal. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an image rendering method applied to a terminal, including:
acquiring data to be rendered; the data to be rendered comprises a primitive list;
performing rasterization rendering on the data to be rendered to obtain a first rendering image;
sending the data to be rendered to a server, and acquiring a second rendered image returned by the server; the second rendered image is obtained by the server performing real-time ray tracing on target data in the data to be rendered; the target data comprises at least one target primitive in the primitive list; the target primitive satisfies a preset category condition;
and synthesizing the first rendering image and the second rendering image to obtain a target rendering image.
Optionally, the synthesizing the first rendering image and the second rendering image to obtain a target rendering image includes:
traversing each second pixel point in the second rendered image, and determining the current first pixel point in the first rendered image corresponding to the currently traversed second pixel point;
determining first pixel attribute information of the current first pixel point and second pixel attribute information of the currently traversed second pixel point;
determining target pixel attribute information of the current target pixel point according to the first pixel attribute information and the second pixel attribute information;
generating the target rendered image according to the target pixel attribute information of each current target pixel point; the current target pixel point is the pixel point in the target rendered image that has the same pixel coordinates as the currently traversed second pixel point and the current first pixel point.
Optionally, the determining, according to the first pixel attribute information and the second pixel attribute information, target pixel attribute information of a current target pixel point includes:
and under the condition that the second pixel attribute information indicates that the currently traversed second pixel point is a shadow pixel point, multiplying the first pixel attribute information by the second pixel attribute information to obtain the target pixel attribute information of the current target pixel point.
Optionally, the determining, according to the first pixel attribute information and the second pixel attribute information, target pixel attribute information of a current target pixel point further includes:
and under the condition that the second pixel attribute information indicates that the currently traversed second pixel point is an illumination pixel point, determining the average value of the first pixel attribute information and the second pixel attribute information as the target pixel attribute information of the current target pixel point.
Optionally, the determining, according to the first pixel attribute information and the second pixel attribute information, target pixel attribute information of a current target pixel point further includes:
determining a first weight corresponding to the first pixel attribute information and a second weight corresponding to the second pixel attribute information according to the primitive material information corresponding to the currently traversed second pixel point;
determining a weighted average of the first pixel attribute information and the second pixel attribute information as the target pixel attribute information of the current target pixel point based on the first weight and the second weight.
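Taken together, the three optional blending rules above (multiply for shadow pixels, plain average for illumination pixels, weighted average keyed to primitive material) can be sketched in Python. This is an illustrative sketch rather than the patent's implementation; the string category labels and the normalization of the weighted average by the weight sum are assumptions.

```python
def blend_pixel(first, second, pixel_kind, weights=(0.5, 0.5)):
    """Fuse one rasterized pixel with its ray-traced counterpart.

    first, second: per-channel attribute values (e.g. RGB) from the first
    (rasterized) and second (ray-traced) rendered images.
    pixel_kind: "shadow"   -> multiply the two attribute values
                "lighting" -> plain average of the two
                "material" -> weighted average, weights from primitive material
    """
    if pixel_kind == "shadow":
        # shadow factor scales down the rasterized base color
        return tuple(a * b for a, b in zip(first, second))
    if pixel_kind == "lighting":
        # blend rasterized and traced lighting equally
        return tuple((a + b) / 2.0 for a, b in zip(first, second))
    w1, w2 = weights  # material-derived weights (normalized below)
    return tuple((w1 * a + w2 * b) / (w1 + w2) for a, b in zip(first, second))
```

For example, a mid-gray base color under a 0.5 shadow factor darkens to 0.25 per channel.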
Optionally, the performing rasterization rendering on the data to be rendered to obtain a first rendered image includes:
determining a fragment corresponding to each of the primitives in the primitive list; the fragment is a pixel area in the two-dimensional image;
traversing each fragment, and determining first pixel attribute information of a first pixel point in each fragment;
and generating a first rendering image according to the first pixel attribute information of the first pixel point in each fragment.
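The fragment traversal above can be sketched minimally, assuming each primitive already carries its rasterized fragments (pixel coordinate plus interpolated color); a real rasterizer would derive the fragments from vertex data, which is elided here.

```python
def rasterize(primitives, width, height):
    """Build the first rendered image from per-primitive fragments.

    Each primitive is assumed to carry a precomputed "fragments" list of
    ((x, y), color) pairs -- the pixel regions of the two-dimensional
    image it covers.
    """
    # start from a black image at the preset resolution
    image = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
    for prim in primitives:
        for (x, y), color in prim["fragments"]:
            if 0 <= x < width and 0 <= y < height:
                image[y][x] = color  # first pixel attribute information
    return image
```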
According to a second aspect of the embodiments of the present disclosure, there is provided an image rendering method applied to a server, including:
acquiring data to be rendered sent by a terminal; the data to be rendered comprises a primitive list;
determining target data from the data to be rendered; the target data comprises at least one target primitive in the primitive list; the target primitive satisfies a preset category condition;
performing real-time ray tracing on the target data to obtain a second rendering image;
sending the second rendering image to the terminal so that the terminal synthesizes the first rendering image and the second rendering image to obtain a target rendering image; the first rendering image is obtained by performing rasterization rendering on the data to be rendered by the terminal.
Optionally, the determining target data from the data to be rendered includes:
determining primitive material information of each primitive in the primitive list;
and taking the primitive as the target primitive under the condition that the primitive material information indicates that the primitive meets the preset category condition.
Optionally, the determining target data from the data to be rendered further includes:
determining primitive depth information of each primitive in the primitive list;
and taking the primitive as the target primitive under the condition that the primitive depth information indicates that the primitive meets the preset category condition.
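The two screening criteria above (primitive material information and primitive depth information) can be combined in a small filter. The patent does not specify which materials or which depth threshold constitute the preset category condition; the material set and threshold below are hypothetical stand-ins.

```python
def select_target_primitives(primitive_list, target_materials, depth_threshold):
    """Screen the primitive list for target primitives.

    A primitive qualifies when its material information indicates a
    category worth ray tracing (e.g. reflective/translucent), or when its
    depth information falls below a threshold. Both criteria are
    hypothetical instances of the patent's "preset category condition".
    """
    targets = []
    for prim in primitive_list:
        by_material = prim.get("material") in target_materials
        by_depth = prim.get("depth", float("inf")) < depth_threshold
        if by_material or by_depth:
            targets.append(prim)
    return targets
```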
Optionally, the performing real-time ray tracing on the target data to obtain a second rendered image includes:
acquiring an initial two-dimensional image, wherein the two-dimensional image comprises at least one second pixel point;
projecting a simulated ray with the viewpoint as the starting point and the direction of the line connecting the viewpoint and the second pixel point as the ray direction;
performing ray tracing according to the sub-surface scattering of the simulated ray in the at least one target primitive, and determining second pixel attribute information of the second pixel point;
and generating the second rendering image according to the second pixel attribute information of each second pixel point in the two-dimensional image.
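The ray projection step above (viewpoint as origin, direction along the line from viewpoint to pixel) reduces to normalizing a difference vector per second pixel point. A sketch, assuming pixel centers are given in world coordinates:

```python
def primary_ray_directions(viewpoint, pixel_centers):
    """Cast one simulated ray per second pixel point: origin at the
    viewpoint, unit direction along the line from the viewpoint to the
    pixel center (pixel centers assumed in world coordinates)."""
    vx, vy, vz = viewpoint
    rays = []
    for px, py, pz in pixel_centers:
        dx, dy, dz = px - vx, py - vy, pz - vz
        norm = (dx * dx + dy * dy + dz * dz) ** 0.5  # normalize the direction
        rays.append((dx / norm, dy / norm, dz / norm))
    return rays
```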
According to a third aspect of the embodiments of the present disclosure, there is provided an image rendering apparatus applied to a terminal, including:
a first acquisition module configured to perform acquisition of data to be rendered; the data to be rendered comprises a primitive list;
the first rendering module is configured to perform rasterization rendering on the data to be rendered to obtain a first rendering image;
the first sending module is configured to send the data to be rendered to a server and acquire a second rendered image returned by the server; the second rendered image is obtained by the server performing real-time ray tracing on target data in the data to be rendered; the target data comprises at least one target primitive in the primitive list; the target primitive satisfies a preset category condition;
and the synthesis module is configured to synthesize the first rendering image and the second rendering image to obtain a target rendering image.
Optionally, the synthesis module includes:
the traversal unit is configured to execute traversal of each second pixel point in the second rendering image and determine a current first pixel point corresponding to the current traversal second pixel point in the first rendering image;
a pixel attribute information determination unit configured to perform determining first pixel attribute information of the current first pixel point and second pixel attribute information of the current traversal second pixel point;
a target pixel attribute information determining unit configured to determine target pixel attribute information of a current target pixel point according to the first pixel attribute information and the second pixel attribute information;
a target rendering image generating unit configured to generate the target rendered image according to the target pixel attribute information of each current target pixel point; the current target pixel point is the pixel point in the target rendered image that has the same pixel coordinates as the currently traversed second pixel point and the current first pixel point.
Optionally, the target pixel attribute information determining unit includes:
and the multiplication subunit is configured to multiply the first pixel attribute information and the second pixel attribute information to obtain the target pixel attribute information of the current target pixel point under the condition that the second pixel attribute information indicates that the currently traversed second pixel point is a shadow pixel point.
Optionally, the target pixel attribute information determining unit further includes:
an averaging subunit configured to determine, when the second pixel attribute information indicates that the currently traversed second pixel point is an illumination-type pixel point, an average of the first pixel attribute information and the second pixel attribute information as the target pixel attribute information of the current target pixel point.
Optionally, the target pixel attribute information determining unit further includes:
the weight determining subunit is configured to determine a first weight corresponding to the first pixel attribute information and a second weight corresponding to the second pixel attribute information according to the primitive material information corresponding to the currently traversed second pixel point;
a weighted average subunit configured to perform determining a weighted average of the first pixel attribute information and the second pixel attribute information as the target pixel attribute information of the current target pixel point based on the first weight and the second weight.
Optionally, the first rendering module includes:
a primitive matching unit configured to perform determining a fragment corresponding to each primitive in the primitive list; the fragment is a pixel area in the two-dimensional image;
a first pixel attribute information determination unit configured to perform traversal of each of the fragments, and determine first pixel attribute information of a first pixel point in each of the fragments;
and the first rendering image generating unit is configured to generate a first rendering image according to the first pixel attribute information of the first pixel point in each fragment.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an image rendering apparatus applied to a server, including:
the second acquisition module is configured to execute acquisition of data to be rendered, which is sent by the terminal; the data to be rendered comprises a primitive list;
a data screening module configured to perform determining target data from the data to be rendered; the target data comprises at least one target primitive in the primitive list; the target primitive satisfies a preset category condition;
the second rendering module is configured to perform real-time ray tracing on the target data to obtain a second rendering image;
the second sending module is configured to send the second rendering image to the terminal, so that the terminal synthesizes the first rendering image and the second rendering image to obtain a target rendering image; the first rendering image is obtained by performing rasterization rendering on the data to be rendered by the terminal.
Optionally, the data screening module includes:
a texture information determination unit configured to perform determination of primitive texture information of each primitive in the primitive list;
and the first class screening unit is configured to perform, when the primitive material information indicates that the primitive meets the preset class condition, taking the primitive as the target primitive.
Optionally, the data screening module further includes:
a depth information determination unit configured to perform determining primitive depth information for each primitive in the primitive list;
a second class screening unit configured to perform, in a case where the primitive depth information indicates that the primitive satisfies the preset class condition, regarding the primitive as the target primitive.
Optionally, the second rendering module includes:
an initial image acquisition unit configured to perform acquisition of an initial two-dimensional image, the two-dimensional image including at least one second pixel point;
the ray projection unit is configured to project a simulated ray with the viewpoint as the starting point and the direction of the line connecting the viewpoint and the second pixel point as the ray direction;
the ray tracing unit is configured to perform ray tracing according to a sub-surface scattering phenomenon of the simulated ray in the at least one target primitive, and determine second pixel attribute information of a second pixel point;
and the second rendering unit is configured to generate the second rendering image according to the second pixel attribute information of each second pixel point in the two-dimensional image.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the image rendering method of any one of the first aspect or the second aspect of the embodiments of the disclosure.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform an image rendering method according to any one of the first or second aspects of the embodiments of the present disclosure.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when executed by a processor, implement the image rendering method according to any one of the first or second aspects of embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the technical scheme provided by the embodiment of the disclosure adopts a terminal and server cooperative rendering mode, the terminal mainly performs rasterization rendering on data to be rendered to obtain a first rendered image, the server performs ray tracing on target data in the data to be rendered to obtain a second rendered image, wherein the target data comprises at least one target primitive in a primitive list in the data to be rendered, and the target primitive meets a preset category condition, and finally the terminal synthesizes the first rendered image and the second rendered image to obtain the target rendered image. In the technical scheme provided by the embodiment of the disclosure, the image rendering efficiency can be effectively improved through the rasterization rendering of the terminal, and the illumination effect of the image is enhanced through the real-time rendering based on ray tracing by the server, so that the rendering quality of the image synthesized by the terminal can be improved; in addition, the server does not perform ray tracing on all primitives in the data to be rendered, but screens and cuts out target primitives according to preset category conditions and performs ray tracing rendering on the target primitives, so that on one hand, the calculated amount of the server in the rendering process is reduced, the rendering duration is shortened, the rendering result of the server side can be provided for the terminal more quickly and in real time, on the other hand, a more flexible ray tracing rendering scheme can be realized according to the setting of the preset category conditions, on the other hand, the hardware requirement on the server is also reduced, and the larger-scale rendering requirement under a real-time scene can be met. 
Therefore, the technical scheme provided by the embodiment of the disclosure adopts a collaborative rendering mode of the terminal and the server, and effectively considers both the efficiency and the quality of image rendering, so that real-time and efficient image rendering experience and high-quality image rendering results can be provided at the terminal.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a diagram illustrating an environment for implementing a method of image rendering according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method of image rendering according to an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a rasterized rendering in accordance with an illustrative embodiment;
FIG. 4 is a diagram illustrating a rasterized rendered image in accordance with an illustrative embodiment;
FIG. 5 is a schematic flow chart illustrating a process for determining target data according to an exemplary embodiment;
FIG. 6 is a schematic diagram of an image illustrating ray tracing in accordance with an exemplary embodiment;
FIG. 7 is a flow diagram illustrating ray tracing in accordance with an exemplary embodiment;
FIG. 8 is a schematic flow diagram illustrating image synthesis according to an exemplary embodiment;
FIG. 9 is a schematic flow diagram illustrating a pixel fusion in accordance with an exemplary embodiment;
FIG. 10 is a block diagram illustrating an image rendering apparatus according to an example embodiment;
FIG. 11 is a block diagram illustrating another image rendering apparatus according to an example embodiment;
FIG. 12 is a block diagram illustrating an electronic device for implementing an image rendering method in accordance with an exemplary embodiment;
FIG. 13 is a block diagram illustrating another electronic device for implementing a method of image rendering in accordance with an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
In order to facilitate understanding of the technical solutions and the technical effects thereof described in the embodiments of the present disclosure, the embodiments of the present disclosure explain related terms:
rasterization: rasterization is a process of pixelating primitives made up of vector vertices.
Ray tracing: ray Tracing, reverse tracking, tracking in the reverse direction of the Ray reaching the viewpoint, finding out the surface point of the object intersecting the sight line through each pixel on the screen, continuing tracking, finding out all light sources influencing the light intensity of the point, and calculating the accurate Ray intensity on the point.
Global illumination: refers to a rendering technique that takes into account both direct illumination from a light source in a scene and indirect illumination reflected by other objects in the scene.
Sub-surface scattering: sub-Surface-Scattering, referred to as 3S for short, describes the phenomenon of scattered illumination when light passes through a transparent or translucent Surface, which refers to the process of light transmission in which light enters an object from the Surface, is internally scattered, and then exits through other vertices of the object Surface.
Referring to fig. 1, a schematic diagram of an application environment of an image rendering method according to an exemplary embodiment is shown, where the application environment may include a terminal 110 and a server 120, and the terminal 110 and the server 120 may be connected through a wired network or a wireless network.
The terminal 110 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, and the like. An Application (App for short) may be installed in the terminal 110, the Application may be an independent Application or a sub-program in the independent Application, and a user of the terminal 110 may log in the Application through pre-registered user information, where the user information may include an account and a password.
The server 120 may be a server that provides a background service for the application program in the terminal 110, may also be another server that is connected and communicated with the background server of the application program, may be one server, or may be a server cluster composed of multiple servers.
In the embodiment of the disclosure, the terminal 110 performs rasterization rendering on data to be rendered to obtain a first rendered image, the server 120 performs ray tracing on target data in the data to be rendered to obtain a second rendered image, where the target data includes at least one target primitive in a primitive list in the data to be rendered and the target primitive satisfies a preset category condition, and finally, the terminal 110 synthesizes the first rendered image and the second rendered image to obtain the target rendered image.
Specifically, after acquiring the data to be rendered including the primitive list, the terminal 110 performs rasterization rendering on the data to be rendered to obtain a first rendered image, and sends the data to be rendered to the server 120. The server 120 first determines target data from the data to be rendered, where the target data includes at least one target primitive in the primitive list and the target primitive satisfies the preset category condition. The server 120 then performs real-time ray tracing on the target data to obtain a second rendered image and sends it to the terminal 110, which synthesizes the first rendered image and the second rendered image into the target rendered image. Rasterization rendering on the terminal 110 effectively improves image rendering efficiency, while server-side real-time rendering based on ray tracing enhances the illumination effect of the image, improving the rendering quality of the image synthesized on the terminal. In addition, the server 120 does not perform ray tracing on all primitives in the data to be rendered, but screens out target primitives according to the preset category condition and performs ray tracing rendering only on them. This reduces the computation of the server 120 during rendering and shortens the rendering time, so the rendering result of the server 120 can be delivered to the terminal 110 more quickly and in real time; it enables a more flexible ray tracing rendering scheme via the preset category condition; and it lowers the hardware requirements on the server 120, meeting larger-scale rendering demands in real-time scenarios.
Therefore, by the real-time collaborative rendering mode, the efficiency and quality of image rendering are effectively considered, so that real-time and efficient image rendering experience and high-quality image rendering results can be provided at the terminal 110.
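The terminal/server split described above can be summarized as a short pipeline; every callable here is a placeholder standing in for the rasterization, screening, tracing, and synthesis steps, not an API from the patent.

```python
def cooperative_render(data, rasterize, select_targets, trace, composite):
    """One frame of the cooperative scheme; all callables are placeholders.

    rasterize      -- terminal: rasterization rendering -> first image
    select_targets -- server: screen primitives by the preset category condition
    trace          -- server: real-time ray tracing -> second image
    composite      -- terminal: synthesize the target rendered image
    """
    first_image = rasterize(data)        # terminal side, fast
    target_data = select_targets(data)   # server side, reduces workload
    second_image = trace(target_data)    # server side, high quality
    return composite(first_image, second_image)
```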
The scheme provided by the embodiment of the disclosure can be deployed at a cloud end, and further relates to a cloud technology and the like.
Cloud technology (Cloud technology): a general term for the hardware, software, network, and management technologies that unify resources in a wide area network or local area network to realize the computation, storage, processing, and sharing of data. It can also be understood as the collection of network, information, integration, management platform, and application technologies built on the cloud computing business model; these resources can form a pool and be used on demand, flexibly and conveniently. Background services of networked systems require large amounts of computing and storage resources, for example video websites, picture websites, and portal websites. With the rapid development of the internet industry, each article may in the future carry its own identification mark that needs to be transmitted to a background system for logical processing; data at different levels will be processed separately, and data in all industries need strong system support, which requires cloud computing as backing. Cloud computing is a computing model that distributes computing tasks across a resource pool formed by a large number of computers, enabling application systems to obtain computing power, storage space, and information services as needed. The network that provides the resources is called the "cloud". To users, resources in the "cloud" appear infinitely expandable, available at any time, used on demand, and paid for by use. As a basic capability provider of cloud computing, a cloud computing resource pool platform (cloud platform for short), generally called Infrastructure as a Service (IaaS), is established; multiple types of virtual resources are deployed in the resource pool for external clients to use selectively.
The cloud computing resource pool mainly comprises: a computing device (which may be a virtualized machine, including an operating system), a storage device, and a network device.
FIG. 2 is a flowchart illustrating an image rendering method according to an exemplary embodiment. As shown in FIG. 2, the method may include the following steps:
in step S201, the terminal acquires data to be rendered; the data to be rendered comprises a primitive list.
In the embodiment of the disclosure, the data to be rendered includes a primitive list. The primitive list includes at least one primitive and the attribute information of each primitive. A primitive is a basic graphic element in the three-dimensional image and may be a point, a line segment, a triangle, or the like. The attribute information of a primitive includes its index identifier in the primitive list, topology information, the geometric object in the three-dimensional image to which the primitive corresponds, the vertices that make up the primitive, and the attribute information of those vertices; the attribute information of a vertex may include vertex coordinates, texture coordinates, a normal, color information, depth information, brightness information, and the like.
Before step S201, the method provided in the embodiment of the present disclosure may further include: determining the vertices in the three-dimensional image to be rendered; and clipping the three-dimensional image according to the vertices to obtain the primitive list, where the primitive list includes at least one primitive. That is, the data to be rendered in the embodiment of the present disclosure is not the original three-dimensional image to be rendered, but the result obtained after processing such as vertex clipping and primitive assembly.
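The primitive list described above can be sketched as a simple data structure. The field names below are illustrative assumptions for this sketch, not identifiers from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    # Vertex attribute information as described: coordinates, texture
    # coordinates, normal, and color (depth/brightness omitted for brevity).
    position: tuple   # (x, y, z) vertex coordinates
    uv: tuple         # texture coordinates
    normal: tuple     # surface normal
    color: tuple      # RGB color information

@dataclass
class Primitive:
    index: int        # index identifier within the primitive list
    topology: str     # topology information, e.g. "point", "line", "triangle"
    vertices: list    # the vertices that make up the primitive

# A minimal primitive list: one triangle assembled from three clipped vertices.
primitive_list = [
    Primitive(
        index=0,
        topology="triangle",
        vertices=[
            Vertex((0.0, 0.0, 1.0), (0.0, 0.0), (0, 0, 1), (255, 0, 0)),
            Vertex((1.0, 0.0, 1.0), (1.0, 0.0), (0, 0, 1), (0, 255, 0)),
            Vertex((0.0, 1.0, 1.0), (0.0, 1.0), (0, 0, 1), (0, 0, 255)),
        ],
    )
]
```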
In step S202, the terminal performs rasterization rendering on the data to be rendered to obtain a first rendered image.
In the embodiment of the disclosure, the terminal performs rasterization rendering on the data to be rendered and can quickly obtain the first rendered image. Rasterization, also called scan conversion, discretizes continuous primitives into adjacent pixels in a two-dimensional image and determines the attribute information of those pixels, so that the first rendered image is obtained from the attribute information of the pixels; the first rendered image is a two-dimensional image with a preset resolution.
In an embodiment of the present disclosure, as shown in fig. 3, the primitive list includes at least one primitive, and step S202 may specifically include the following steps:
in step S2021, determining a fragment corresponding to each primitive in the primitive list; a fragment is a region of pixels in a two-dimensional image.
A primitive is a basic graphic unit in the three-dimensional image; it is projected and mapped into the two-dimensional image, and the pixel area covered by the projection in the two-dimensional image is the fragment corresponding to that primitive. As shown in fig. 4, the projection of primitive A in the two-dimensional screen is a, and the projection of primitive B is b; however, a and b are not yet fragments, and need to be discretized into adjacent pixels, that is, each fragment corresponds to a list of pixel points.
Furthermore, the attribute information of the fragment corresponding to a primitive can be determined from the at least one vertex that makes up the primitive. Specifically, the attribute information of the fragment may be obtained by interpolating the attribute information of the primitive's vertices, and may include a normal direction, vertex coordinates, texture coordinates, image-space coordinates, and the like. In addition to the interpolated attribute information, the fragment also carries the pixel coordinates of the two-dimensional image, so as to represent the pixel area covered by the fragment.
In step S2022, each fragment is traversed to determine first pixel attribute information of a first pixel point in each fragment.
Through scan conversion and interpolation, the pixel area covered by each primitive, namely the fragment, can be obtained. In practical applications, a fragment shader can be written according to the actual rendering requirements to implement operations such as per-fragment shading, depth testing, color blending, and stencil testing, so as to determine the first pixel attribute information of each first pixel point in each fragment; the first pixel attribute information may include pixel coordinates, color information (such as RGB values), a depth value, and the like. It should be noted that the first pixel points here are pixel points of the two-dimensional image, as distinguished from the pixel points of the second rendered image generated by the server's rendering.
In step S2023, a first rendered image is generated according to the first pixel attribute information of the first pixel point in each fragment.
Specifically, multiple fragments may cover the same first pixel point. In this case, the first pixel attribute information stored for that pixel point may include multiple pieces of color information, multiple depth values, and so on, with each depth value corresponding to one fragment. The occlusion relationship among the fragments can be determined from the magnitudes of the depth values, and the depth values are then discarded or blended according to the occlusion relationship to obtain the final depth value of the first pixel point; the color information can be obtained in the same way. The first rendered image can then be generated from the final first pixel attribute information of each first pixel point in the two-dimensional image.
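The per-pixel depth-test logic described in step S2023 can be sketched as follows; the dictionary layout of a fragment is an assumption of this sketch:

```python
def resolve_pixel(fragments):
    """Depth-test sketch: among all fragments covering one first pixel
    point, keep the attributes of the fragment nearest the camera,
    i.e. the one with the smallest depth value."""
    nearest = min(fragments, key=lambda f: f["depth"])
    return {"color": nearest["color"], "depth": nearest["depth"]}

# Two fragments cover the same pixel; the nearer one (depth 0.3) occludes
# the farther one (depth 0.8), so its color becomes the pixel's color.
frags = [
    {"depth": 0.8, "color": (10, 10, 10)},
    {"depth": 0.3, "color": (200, 50, 50)},
]
pixel = resolve_pixel(frags)
```

A blending variant would instead combine the colors weighted by coverage or transparency, as the text's "discarding or blending" phrasing allows.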
In the above embodiment, each primitive is mapped to a fragment of the two-dimensional image, and the attribute information of each pixel point in the two-dimensional image is then determined, yielding the first rendered image; that is, the rasterization rendering process is carried out at the terminal. Rasterization rendering is essentially performed with the primitive (or, equivalently, a geometric object of the three-dimensional image) as the center, and is faster than ray-traced rendering, so the image rendering efficiency can be effectively improved.
In step S203, the data to be rendered is transmitted to the server.
It should be noted that step S203 may also take place after step S201 and before step S202; the embodiment of the present disclosure is only an example and does not limit the order of the steps.
In step S204, the server obtains data to be rendered sent by the terminal, and determines target data from the data to be rendered.
In the embodiment of the present disclosure, the primitive list includes at least one primitive, and the target data includes at least one target primitive of the primitive list, where the target primitive satisfies a preset category condition; that is, the target data is part or all of the data to be rendered. In addition, after the data to be rendered is screened and clipped, the target primitives in the resulting target data still follow the topological structure of the primitive list. Illustratively, if the clipped target data comprises multiple primitives of a head region, the positions in three-dimensional space of the target primitives corresponding to the eye portion are the same as the positions in three-dimensional space of the primitives of the primitive list corresponding to that same eye portion.
In an embodiment of the present disclosure, as shown in fig. 5, specifically, the primitive list includes at least one primitive, and step S204 may include the following steps:
in step S2041, primitive material information of each primitive in the primitive list is determined.
The primitive material information represents the material corresponding to the primitive, such as leather, plastic, glass, or skin; the refraction direction, transmittance, scattering direction, and so on of incident light differ on surfaces of different materials.
In step S2042, when the primitive material information indicates that the primitive satisfies the preset category condition, the primitive is taken as a target primitive.
The preset category condition may limit the material category of the target primitives. For example, primitives of a skin material are screened out as target primitives, so that rendering them improves the illumination effect of the skin area in the image, for instance by improving the brightness of the skin area.
In step S2043, primitive depth information for each primitive in the primitive list is determined.
According to the foregoing embodiment, the attribute information of a primitive may include the depth information of its vertices, and the primitive depth information may be determined using a depth query algorithm; the depth information represents the distance between the primitive and the camera center.
In step S2044, in the case that the primitive depth information indicates that the primitive satisfies the preset category condition, the primitive is taken as the target primitive.
The preset category condition may define the target primitives as a light-blocking category. For example, the primitives corresponding to a glasses area where light is blocked, a face area shadowed by the glasses, and a hair area are screened out as target primitives, so that rendering them improves the illumination effect of the glasses and hair areas in the image, for instance by improving how clearly the illumination of those areas is distinguished from shadow. Specifically, whether a primitive overlaps other primitives may be determined based on the primitive depth information in combination with the primitive's normal direction.
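The screening of steps S2041–S2044 can be sketched as a single filter over the primitive list; the material set and the occlusion predicate below stand in for the preset category condition and are assumptions of this sketch:

```python
def select_target_primitives(primitive_list, materials, occluded):
    """A primitive becomes a target primitive when its material information
    satisfies the preset category condition (steps S2041-S2042) or its
    depth information does (steps S2043-S2044)."""
    targets = []
    for prim in primitive_list:
        if prim["material"] in materials or occluded(prim):
            targets.append(prim)
    return targets

prims = [
    {"id": 0, "material": "skin", "depth": 0.4},
    {"id": 1, "material": "plastic", "depth": 0.9},
    {"id": 2, "material": "hair", "depth": 0.2},
]
# Select skin-material primitives plus any primitive the (hypothetical)
# depth predicate flags as part of an occlusion relationship.
targets = select_target_primitives(
    prims, materials={"skin"}, occluded=lambda p: p["depth"] < 0.3)
```

Primitive 0 is selected by the material condition and primitive 2 by the depth condition; primitive 1 stays with the terminal's rasterization path only.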
In the above embodiment, the server does not render all the primitives of the data to be rendered but screens them first. First, this reduces the amount of computation the server performs during rendering and shortens the rendering duration, so the server-side rendering result can be provided to the terminal more quickly and in real time. Second, a more flexible and targeted rendering scheme can be realized through the setting of the preset category condition, balancing rendering efficiency and rendering quality. Third, the hardware requirements on the server are also reduced, so larger-scale rendering demands in real-time scenarios can be met.
In step S205, the server performs real-time ray tracing on the target data to obtain a second rendered image.
In the embodiment of the disclosure, the server performs real-time ray tracing on the target data, and the resulting second rendered image effectively enhances the illumination effect of the image, thereby improving the realism of the rendered image. As shown in fig. 6, a rendering method based on ray tracing mainly comprises ray casting, ray tracing, and shading: after a primary ray is cast toward the two-dimensional image plane, the reflection, refraction, and scattering of the ray, together with the paths of its multiple bounces through the scene (such as the secondary rays and shadow rays in fig. 6), are traced, and the light energy accumulated from the light sources at that point can serve as the attribute information of the pixel point from which the ray was cast; that is, the rendering result can include the color information, brightness information, depth information, and so on of the pixel point. A ray-traced rendering result has a global illumination effect but places high demands on hardware performance and is difficult to run in real time on a mobile terminal, so the embodiment of the disclosure uses the server to assist in completing the real-time ray-traced rendering of the target data. In one embodiment of the present disclosure, the second rendered image is a two-dimensional image of the same resolution as the first rendered image.
In one embodiment of the present disclosure, as shown in fig. 7, a ray tracing model is established for the sub-surface scattering phenomenon of light, and step S205 may include the following steps:
in step S2051, an initial two-dimensional image is obtained, where the two-dimensional image includes at least one second pixel point.
The initial two-dimensional image is a blank two-dimensional image in which the information of each second pixel point takes a default value.
In step S2052, a simulated ray is projected with the viewpoint as a starting point and a connection direction between the viewpoint and the second pixel point as a ray direction.
The viewpoint is the observer's perspective, and can also be understood as the camera center.
The steps S2051 and S2052 can refer to the schematic diagram of ray casting in fig. 6.
In step S2053, ray tracing is performed according to the sub-surface scattering phenomenon of the simulated ray in the at least one target primitive, and second pixel attribute information of a second pixel point is determined.
In this embodiment, ray tracing is performed for the sub-surface scattering phenomenon. For example, when a primitive of skin material receives an incident ray, the ray may pass through that primitive and exit from other primitives of the face, i.e., scattering occurs; performing sub-surface-scattering ray tracing for the primitives of the face area can significantly improve the illumination effect of the face area and improve its realism. For primitives with obvious shadow occlusion, such as hair and the human face, ray-traced rendering can effectively improve the realism of the contrast between illumination and shadow, improving the image rendering quality.
In step S2054, a second rendered image is generated according to the second pixel attribute information of each second pixel point in the two-dimensional image.
In the above embodiment, the ray tracing model is established in a targeted manner for the sub-surface scattering phenomenon of different materials, improving the illumination effect of those materials, so the image rendering quality can be improved in a targeted way. That is, the embodiment of the present disclosure provides a more flexible and more targeted rendering optimization scheme while still meeting the efficiency requirements of real-time rendering.
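The ray casting of steps S2051–S2052 can be sketched as follows; the flat-tuple coordinate representation is an assumption of this sketch, and the tracing of step S2053 is left abstract:

```python
import math

def primary_ray(viewpoint, pixel_pos):
    """Ray-casting sketch: the simulated ray starts at the viewpoint
    (the camera center) and points along the line connecting the
    viewpoint and the second pixel point on the image plane."""
    origin = tuple(float(c) for c in viewpoint)
    delta = tuple(p - o for p, o in zip(pixel_pos, origin))
    length = math.sqrt(sum(d * d for d in delta))
    # Normalize so the ray direction is a unit vector.
    direction = tuple(d / length for d in delta)
    return origin, direction

# Cast a ray from the origin through a pixel point 5 units down the -z axis.
origin, direction = primary_ray((0, 0, 0), (0, 0, -5))
# direction is the unit vector (0.0, 0.0, -1.0)
```

Each such ray would then be traced through the target primitives, accumulating sub-surface scattering contributions into the second pixel attribute information.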
In step S206, the server transmits the second rendered image to the terminal.
In step S207, the terminal synthesizes the first rendering image and the second rendering image to obtain a target rendering image.
In the embodiment of the disclosure, the first rendering image and the second rendering image are synthesized by the terminal to obtain the target rendering image, and the efficiency and quality of image rendering are effectively considered, so that real-time and efficient image rendering experience and high-quality image rendering results can be provided at the terminal. The target rendered image may be a newly generated two-dimensional image or may be an optimization of the first rendered image based on the second rendered image.
In an embodiment of the present disclosure, as shown in fig. 8, the resolutions of the first rendering image and the second rendering image are the same, the obtained target rendering image also has the same resolution, and the composition of the first rendering image and the second rendering image may be performed on a pixel-by-pixel basis, which specifically includes the following steps:
in step S701, each second pixel point in the second rendered image is traversed, and a current first pixel point corresponding to the currently traversed second pixel point in the first rendered image is determined.
Specifically, the current first pixel point corresponding to the same pixel coordinates in the first rendered image is determined according to the pixel coordinates of the currently traversed second pixel point.
In step S702, first pixel attribute information of the current first pixel point and second pixel attribute information of the current traversal second pixel point are determined.
In step S703, target pixel attribute information of the current target pixel point is determined according to the first pixel attribute information and the second pixel attribute information.
Specifically, for a group of matched current traversal second pixel points and current first pixel points, the pixel attribute information of each dimension is respectively fused to obtain the pixel attribute information of the current target pixel point in each dimension.
In step S704, a target rendered image is generated according to the target pixel attribute information of each current target pixel point; and the current target pixel point is a pixel point in the target rendering image, which has the same pixel coordinates with the current traversal second pixel point and the current first pixel point.
In the pixel-by-pixel matching and synthesizing mode, the pixel coordinates of the current target pixel point are the same as the pixel coordinates of the current traversal second pixel point and the current first pixel point.
In the above embodiment, the first rendered image and the second rendered image are synthesized on a pixel-by-pixel basis in consideration of the same resolution, so that a final target rendered image can be obtained. The generation of the target rendering image effectively considers the efficiency and the quality of image rendering, so that real-time and efficient image rendering experience and a high-quality image rendering result can be provided at a terminal. In addition, in the above embodiments, the second pixel point is taken as a traversal object, and the first pixel point may also be taken as a traversal object, which is not described herein again.
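The pixel-by-pixel matching and fusion of steps S701–S704 can be sketched as follows; representing each image as a nested list of single-valued attributes (e.g. luminance) is a simplifying assumption of this sketch:

```python
def composite(first_img, second_img, blend):
    """Both rendered images share the same resolution, so each second
    pixel point is matched to the first pixel point at the same pixel
    coordinates, and their attribute values are fused by `blend` to give
    the target pixel attribute information."""
    height, width = len(first_img), len(first_img[0])
    return [[blend(first_img[y][x], second_img[y][x])
             for x in range(width)]
            for y in range(height)]

first = [[100, 200], [50, 80]]   # e.g. luminance from rasterization
second = [[20, 40], [10, 60]]    # luminance from ray tracing
# A simple fusion rule: integer average of the two attribute values.
target = composite(first, second, blend=lambda a, b: (a + b) // 2)
```

In practice the blend function would operate per attribute dimension (color channels, depth, etc.) rather than on a single scalar.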
In one possible implementation, as shown in fig. 9, step S703 may include the following steps:
in step S7031, when the second pixel attribute information indicates that the currently traversed second pixel is a shadow-type pixel, the first pixel attribute information and the second pixel attribute information are multiplied to obtain target pixel attribute information of the current target pixel.
If the currently traversed second pixel point is a shadow-type pixel point, the first pixel attribute information and the second pixel attribute information can be multiplied, by a preset proportion and per attribute-information dimension, to obtain the target pixel attribute information of the current target pixel point in each attribute-information dimension. The preset proportion can be configured in advance, and a threshold can be set as an upper bound on the multiplication result.
In step S7032, when the second pixel attribute information indicates that the currently traversed second pixel is an illumination-type pixel, an average value of the first pixel attribute information and the second pixel attribute information is determined as target pixel attribute information of the current target pixel.
If the currently traversed second pixel point is the illumination type pixel point, the first pixel attribute information and the second pixel attribute information can be respectively averaged according to the attribute information dimensions, so that target pixel attribute information of the current target pixel point in each attribute information dimension is obtained.
In the above embodiment, when synthesizing pixel by pixel, the synthesis result is optimized according to the type of the pixel, so as to improve the illumination effect, such as light and dark contrast, in the target rendering image, so as to approach the real effect or the specific style effect corresponding to the three-dimensional scene.
Further, the method provided by the embodiment of the present disclosure may further include:
in step S70321, a first weight corresponding to the first pixel attribute information and a second weight corresponding to the second pixel attribute information are determined according to the primitive material information corresponding to the currently traversed second pixel point.
When the three-dimensional image is modeled, different synthesis weight proportions can be set in advance for different materials, and then the synthesis weight proportions of a first pixel point in a first rendering image and a second pixel point in a second rendering image, namely a first weight corresponding to the first pixel attribute information and a second weight corresponding to the second pixel attribute information, can be determined according to the corresponding relation between the pixel point and the primitive.
In step S70322, a weighted average of the first pixel attribute information and the second pixel attribute information is determined as the target pixel attribute information of the current target pixel point, based on the first weight and the second weight.
Specifically, the first pixel attribute information and the second pixel attribute information are weighted, summed, and averaged in each dimension, so that the composite result is optimized to better meet the rendering requirements of a specific scene and a specific style. For example, for skin material, if it is assumed that 30% of photons enter the skin, the first pixel attribute information is given a first weight of 70% and the second pixel attribute information a second weight of 30%.
In the embodiment, the corresponding composite weight ratio is set according to the characteristics of different materials, so that the rendering requirements of a specific scene and a specific style can be better met, and the image quality of the target rendered image is improved.
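The material-dependent weighted average described above can be sketched as follows; the per-material weight table is a hypothetical configuration, set at modeling time as the text describes:

```python
def material_weighted_blend(first, second, material, weights):
    """Weighted-average fusion: the composite weight ratio is looked up
    per material; e.g. if roughly 30% of photons enter skin, the first
    (rasterized) value gets weight 0.7 and the second (ray-traced)
    value gets weight 0.3."""
    w_first, w_second = weights.get(material, (0.5, 0.5))
    return w_first * first + w_second * second

# Hypothetical weight table configured during three-dimensional modeling.
weights = {"skin": (0.7, 0.3)}
value = material_weighted_blend(100.0, 200.0, "skin", weights)
# 0.7 * 100 + 0.3 * 200 = 130.0
```

The correspondence between a pixel point and its primitive supplies the `material` key, as described in step S70321.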
As can be seen from the above, the technical solutions provided by the embodiments of the present disclosure adopt a collaborative rendering approach between terminal and server: the terminal performs rasterization rendering on the data to be rendered to obtain a first rendered image; the server performs ray tracing on target data within the data to be rendered to obtain a second rendered image, where the target data includes at least one target primitive of the primitive list in the data to be rendered and the target primitive satisfies a preset category condition; finally, the terminal synthesizes the first rendered image and the second rendered image to obtain the target rendered image. With this technical solution, the terminal's rasterization rendering effectively improves image rendering efficiency, while the server's real-time ray-traced rendering enhances the illumination effect of the image, so the rendering quality of the image synthesized by the terminal can be improved. In addition, the server does not perform ray tracing on all primitives of the data to be rendered; it screens and clips out the target primitives according to the preset category condition and ray-traces only those. First, this reduces the server's computation during rendering and shortens the rendering duration, so the server-side rendering result can be provided to the terminal more quickly and in real time; second, a more flexible ray-traced rendering scheme can be realized through the setting of the preset category condition; third, the hardware requirements on the server are reduced, so larger-scale rendering demands can be met.
Therefore, the technical scheme provided by the embodiment of the disclosure adopts a collaborative rendering mode of the terminal and the server, and effectively considers both the efficiency and the quality of image rendering, so that real-time and efficient image rendering experience and high-quality image rendering results can be provided at the terminal.
Fig. 10 is a block diagram illustrating an image rendering apparatus according to an exemplary embodiment. Referring to fig. 10, the apparatus 1000 includes:
a first obtaining module 1010 configured to perform obtaining data to be rendered; the data to be rendered comprises a primitive list;
a first rendering module 1020 configured to perform rasterization rendering on the data to be rendered to obtain a first rendered image;
a first sending module 1030 configured to send the data to be rendered to a server and obtain a second rendered image returned by the server; the second rendering image is obtained by the server performing real-time ray tracing on target data in the data to be rendered; the target data comprises at least one target primitive in the primitive list; the target graphic primitive meets a preset category condition;
a composition module 1040 configured to perform composition of the first rendering image and the second rendering image to obtain a target rendering image.
Optionally, the synthesis module 1040 includes:
the traversal unit is configured to execute traversal of each second pixel point in the second rendering image and determine a current first pixel point corresponding to the current traversal second pixel point in the first rendering image;
a pixel attribute information determination unit configured to perform determining first pixel attribute information of the current first pixel point and second pixel attribute information of the current traversal second pixel point;
a target pixel attribute information determining unit configured to determine target pixel attribute information of a current target pixel point according to the first pixel attribute information and the second pixel attribute information;
a target rendering image generation unit configured to generate the target rendering image according to target pixel attribute information of each of the current target pixel points; and the current target pixel point is a pixel point in the target rendering image, and the pixel point has the same pixel coordinate with the current traversal second pixel point and the current first pixel point.
Optionally, the target pixel attribute information determining unit includes:
and the multiplication subunit is configured to multiply the first pixel attribute information and the second pixel attribute information to obtain the target pixel attribute information of the current target pixel point under the condition that the second pixel attribute information indicates that the currently traversed second pixel point is a shadow pixel point.
Optionally, the target pixel attribute information determining unit further includes:
an averaging subunit configured to determine, when the second pixel attribute information indicates that the currently traversed second pixel point is an illumination-type pixel point, an average of the first pixel attribute information and the second pixel attribute information as the target pixel attribute information of the current target pixel point.
Optionally, the target pixel attribute information determining unit further includes:
the weight determining subunit is configured to execute determining a first weight corresponding to the first pixel attribute information and a second weight corresponding to the second pixel attribute information according to the primitive material information corresponding to the currently traversed second pixel;
a weighted average subunit configured to perform determining a weighted average of the first pixel attribute information and the second pixel attribute information as the target pixel attribute information of the current target pixel point based on the first weight and the second weight.
Optionally, the first rendering module 1020 includes:
a primitive matching unit configured to perform determining a fragment corresponding to a primitive in the primitive list; the fragment is a pixel area in the two-dimensional image;
a first pixel attribute information determination unit configured to perform traversal of each of the fragments, and determine first pixel attribute information of a first pixel point in each of the fragments;
a first rendering image generating unit configured to generate the first rendering image according to the first pixel attribute information of the first pixel points in each fragment.
Fig. 11 is a block diagram illustrating another image rendering apparatus according to an exemplary embodiment. Referring to fig. 11, the apparatus 1100 includes:
a second obtaining module 1110, configured to perform obtaining of data to be rendered sent by a terminal; the data to be rendered comprises a primitive list;
a data filtering module 1120 configured to perform determining target data from the data to be rendered; the target data comprises at least one target primitive in the primitive list; the target graphic primitive meets a preset category condition;
a second rendering module 1130 configured to perform real-time ray tracing on the target data to obtain a second rendered image;
a second sending module 1140, configured to execute sending the second rendered image to the terminal, so that the terminal synthesizes the first rendered image and the second rendered image to obtain a target rendered image; the first rendering image is obtained by performing rasterization rendering on the data to be rendered by the terminal.
Optionally, the primitive list includes at least one primitive; the data filtering module 1120 includes:
a material information determination unit configured to perform determining primitive material information of each primitive in the primitive list;
and the first class screening unit is configured to perform, when the primitive material information indicates that the primitive meets the preset class condition, taking the primitive as the target primitive.
Optionally, the primitive list includes at least one primitive; the data filtering module 1120 further comprises:
a depth information determination unit configured to perform determining primitive depth information for each primitive in the primitive list;
a second category screening unit configured to perform, in a case that the primitive depth information indicates that the primitive satisfies the preset category condition, regarding the primitive as the target primitive.
Optionally, the second rendering module 1130 includes:
an initial image acquisition unit configured to perform acquisition of an initial two-dimensional image, the two-dimensional image including at least one second pixel point;
a ray projection unit configured to project a simulated ray, taking the viewpoint as a starting point and the direction of the line connecting the viewpoint and the second pixel point as the ray direction;
the ray tracing unit is configured to perform ray tracing according to a sub-surface scattering phenomenon of the simulated ray in the at least one target primitive, and determine second pixel attribute information of a second pixel point;
and the second rendering unit is configured to generate the second rendering image according to the second pixel attribute information of each second pixel point in the two-dimensional image.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be repeated here.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement an image rendering method according to the embodiments of the present disclosure.
Fig. 12 is a block diagram of an electronic device, which may be a terminal, for implementing an image rendering method according to an exemplary embodiment; its internal structure may be as shown in Fig. 12. The electronic device includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus. The processor of the electronic device provides computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The network interface of the electronic device is used to connect to and communicate with an external terminal through a network. The computer program, when executed by the processor, implements an image rendering method. The display screen of the electronic device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the electronic device, or an external keyboard, touch pad, or mouse.
Fig. 13 is a block diagram of an electronic device, which may be a server, for implementing an image rendering method according to an exemplary embodiment; its internal structure may be as shown in Fig. 13. The electronic device includes a processor, a memory, and a network interface connected through a system bus. The processor of the electronic device provides computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The network interface of the electronic device is used to connect to and communicate with an external terminal through a network. The computer program, when executed by the processor, implements an image rendering method.
It will be understood by those skilled in the art that the configurations shown in Figs. 12 and 13 are merely block diagrams of some configurations relevant to the present disclosure and do not limit the electronic device to which the present disclosure is applied; a particular electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an exemplary embodiment, there is also provided a computer-readable storage medium including instructions that, when executed by a processor of an electronic device, enable the electronic device to perform an image rendering method in an embodiment of the present disclosure.
In an exemplary embodiment, a computer program product is also provided, comprising computer instructions which, when executed by a processor, implement an image rendering method in embodiments of the present disclosure.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. The specification and examples are to be considered exemplary only, with the true scope and spirit of the disclosure indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. An image rendering method, applied to a terminal, the method comprising:
acquiring data to be rendered; the data to be rendered comprises a primitive list;
performing rasterization rendering on the data to be rendered to obtain a first rendering image;
sending the data to be rendered to a server, and acquiring a second rendering image returned by the server; the second rendering image is obtained by the server performing real-time ray tracing on target data in the data to be rendered; the target data comprises at least one target primitive in the primitive list; and the target primitive satisfies a preset category condition;
and synthesizing the first rendering image and the second rendering image to obtain a target rendering image.
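The hybrid pipeline of claim 1 can be sketched end-to-end: the terminal rasterizes the full primitive list locally, the server ray-traces only the primitives satisfying the preset category condition, and the two images are composited into the target rendered image. The server call and all function names here are illustrative stubs, not the claimed implementation.

```python
def render_frame(primitives, rasterize, server_ray_trace, composite, condition):
    """Minimal sketch of the terminal-side flow of claim 1."""
    first_image = rasterize(primitives)                 # local rasterization pass
    targets = [p for p in primitives if condition(p)]   # preset category condition
    second_image = server_ray_trace(targets)            # remote ray-tracing pass
    return composite(first_image, second_image)         # target rendered image
```

A usage example with toy stand-ins: counting primitives in place of real rendering shows how the two passes feed the compositor.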
2. The image rendering method of claim 1, wherein the synthesizing the first rendering image and the second rendering image to obtain a target rendering image comprises:
traversing each second pixel point in the second rendering image, and determining a current first pixel point corresponding to the currently traversed second pixel point in the first rendering image;
determining first pixel attribute information of the current first pixel point and second pixel attribute information of the currently traversed second pixel point;
determining target pixel attribute information of a current target pixel point according to the first pixel attribute information and the second pixel attribute information;
generating the target rendering image according to the target pixel attribute information of each current target pixel point; wherein the current target pixel point is a pixel point in the target rendering image that has the same pixel coordinates as the currently traversed second pixel point and the current first pixel point.
3. The image rendering method of claim 2, wherein the determining the target pixel attribute information of the current target pixel point according to the first pixel attribute information and the second pixel attribute information comprises:
and under the condition that the second pixel attribute information indicates that the currently traversed second pixel point is a shadow pixel point, multiplying the first pixel attribute information by the second pixel attribute information to obtain the target pixel attribute information of the current target pixel point.
4. The image rendering method of claim 2, wherein the determining the target pixel attribute information of the current target pixel point according to the first pixel attribute information and the second pixel attribute information further comprises:
and under the condition that the second pixel attribute information indicates that the currently traversed second pixel point is an illumination pixel point, determining the average value of the first pixel attribute information and the second pixel attribute information as the target pixel attribute information of the current target pixel point.
5. The image rendering method of claim 2, wherein determining the target pixel attribute information of the current target pixel point according to the first pixel attribute information and the second pixel attribute information further comprises:
determining a first weight corresponding to the first pixel attribute information and a second weight corresponding to the second pixel attribute information according to pixel material information corresponding to the currently traversed second pixel point;
determining a weighted average of the first pixel attribute information and the second pixel attribute information as the target pixel attribute information of the current target pixel point based on the first weight and the second weight.
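The three per-pixel compositing rules of claims 3 to 5 can be sketched in one function: multiply when the traced pixel is a shadow pixel, average when it is an illumination pixel, and otherwise blend with material-dependent weights. The category labels and the weight pair are assumptions of this sketch; the disclosure does not fix their concrete representation.

```python
def composite_pixel(first_attr, second_attr, category, weights=None):
    """Combine first and second pixel attribute information for one pixel.
    category: "shadow", "lighting", or any other material category label."""
    if category == "shadow":       # claim 3: multiply the two attributes
        return first_attr * second_attr
    if category == "lighting":     # claim 4: average the two attributes
        return (first_attr + second_attr) / 2.0
    w1, w2 = weights               # claim 5: material-based weighted average
    return (first_attr * w1 + second_attr * w2) / (w1 + w2)
```

Applying this per pixel coordinate, over pixels that share coordinates in both images, yields the target rendering image of claim 2.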
6. The image rendering method according to claim 1, wherein the performing rasterization rendering on the data to be rendered to obtain a first rendered image comprises:
determining the fragments corresponding to each primitive in the primitive list; wherein each fragment is a pixel region in a two-dimensional image;
traversing each fragment, and determining first pixel attribute information of a first pixel point in each fragment;
and generating a first rendering image according to the first pixel attribute information of the first pixel point in each fragment.
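The rasterization path of claim 6 can be sketched as follows: each primitive maps to a set of fragments, traversing the fragments yields first pixel attribute information, and the first rendering image is assembled from those attributes. The fragment representation and shading function are stand-ins; a real rasterizer would also perform depth testing, which this toy sketch omits (later fragments simply overwrite earlier ones).

```python
def rasterize(primitive_fragments, shade):
    """primitive_fragments: iterable of fragment lists, one list per primitive.
    Each fragment is (pixel_coord, payload); shade(payload) -> first pixel
    attribute. Returns {pixel_coord: attribute}, i.e. the first image."""
    image = {}
    for fragments in primitive_fragments:
        for coord, payload in fragments:
            # No depth test here: the last primitive covering a pixel wins.
            image[coord] = shade(payload)
    return image
```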
7. An image rendering method, applied to a server, the method comprising:
acquiring data to be rendered sent by a terminal; the data to be rendered comprises a primitive list;
determining target data from the data to be rendered; the target data comprises at least one target primitive in the primitive list; and the target primitive satisfies a preset category condition;
performing real-time ray tracing on the target data to obtain a second rendering image;
sending the second rendering image to the terminal so that the terminal synthesizes the first rendering image and the second rendering image to obtain a target rendering image; the first rendering image is obtained by performing rasterization rendering on the data to be rendered by the terminal.
8. The image rendering method according to claim 7, wherein the determining target data from the data to be rendered comprises:
determining primitive material information of each primitive in the primitive list;
and taking the primitive as the target primitive under the condition that the primitive material information indicates that the primitive meets the preset category condition.
9. The image rendering method according to claim 7, wherein the determining target data from the data to be rendered further comprises:
determining primitive depth information of each primitive in the primitive list;
and taking the primitive as the target primitive under the condition that the primitive depth information indicates that the primitive meets the preset category condition.
10. The image rendering method of claim 7, wherein the performing real-time ray tracing on the target data to obtain a second rendered image comprises:
acquiring an initial two-dimensional image, wherein the two-dimensional image comprises at least one second pixel point;
projecting a simulated ray by taking a viewpoint as a starting point and taking the direction of the line connecting the viewpoint and the second pixel point as the ray direction;
performing ray tracing according to a sub-surface scattering phenomenon of the simulated ray in the at least one target primitive, and determining second pixel attribute information of the second pixel point;
and generating the second rendering image according to the second pixel attribute information of each second pixel point in the two-dimensional image.
11. An image rendering apparatus, applied to a terminal, the apparatus comprising:
a first acquisition module configured to acquire data to be rendered; the data to be rendered comprises a primitive list;
the first rendering module is configured to perform rasterization rendering on the data to be rendered to obtain a first rendering image;
a first sending module configured to send the data to be rendered to a server and acquire a second rendered image returned by the server; the second rendered image is obtained by the server performing real-time ray tracing on target data in the data to be rendered; the target data comprises at least one target primitive in the primitive list; and the target primitive satisfies a preset category condition;
and the synthesis module is configured to synthesize the first rendering image and the second rendering image to obtain a target rendering image.
12. An image rendering apparatus, applied to a server, the apparatus comprising:
a second acquisition module configured to acquire data to be rendered sent by a terminal; the data to be rendered comprises a primitive list;
a data screening module configured to determine target data from the data to be rendered; the target data comprises at least one target primitive in the primitive list; and the target primitive satisfies a preset category condition;
the second rendering module is configured to perform real-time ray tracing on the target data to obtain a second rendering image;
the second sending module is configured to send the second rendering image to the terminal, so that the terminal synthesizes the first rendering image and the second rendering image to obtain a target rendering image; the first rendering image is obtained by performing rasterization rendering on the data to be rendered by the terminal.
13. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the image rendering method of any one of claims 1 to 10.
14. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the image rendering method of any one of claims 1 to 10.
CN202211067613.8A 2022-09-01 2022-09-01 Image rendering method and device, electronic equipment and storage medium Pending CN115496845A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211067613.8A CN115496845A (en) 2022-09-01 2022-09-01 Image rendering method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115496845A true CN115496845A (en) 2022-12-20

Family

ID=84468267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211067613.8A Pending CN115496845A (en) 2022-09-01 2022-09-01 Image rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115496845A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116033065A (en) * 2022-12-29 2023-04-28 Vivo Mobile Communication Co., Ltd. Playing method, playing device, electronic equipment and readable storage medium
CN116206035A (en) * 2023-01-12 2023-06-02 Beijing Baidu Netcom Science & Technology Co., Ltd. Face reconstruction method, device, electronic equipment and storage medium
CN116206035B (en) * 2023-01-12 2023-12-01 Beijing Baidu Netcom Science & Technology Co., Ltd. Face reconstruction method, device, electronic equipment and storage medium
CN116824028A (en) * 2023-08-30 2023-09-29 Tencent Technology (Shenzhen) Co., Ltd. Image coloring method, apparatus, electronic device, storage medium, and program product
CN116824028B (en) * 2023-08-30 2023-11-17 Tencent Technology (Shenzhen) Co., Ltd. Image coloring method, apparatus, electronic device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination