CN113781620B - Rendering method and device in game and electronic equipment
- Publication number: CN113781620B
- Application number: CN202111076331.XA
- Authority: CN (China)
- Prior art keywords: rendering, pixel point, value, depth, pixel
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T15/005 — General purpose rendering architectures (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general; G06T15/00: 3D [three dimensional] image rendering)
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management (Y02D: Climate change mitigation technologies in information and communication technologies)
Abstract
The invention provides an in-game rendering method and apparatus, and an electronic device. A plurality of depth maps of a target image are acquired, and an initial rendering result of a rendering object corresponding to each depth map is determined; the target image is divided into an edge region, a non-edge region and a transition region; and each rendering region is rendered in a rendering mode preset for that region, based on the initial rendering result of the rendering object corresponding to each depth map, to obtain a rendered target image. In this manner, the target image is divided into the edge region, the non-edge region and the transition region according to its plurality of depth maps and the initial rendering result corresponding to each depth map, and different regions are rendered in different rendering modes, which avoids mosaic artifacts and incorrect occlusion relations in the edge and transition regions, improves the rendering effect and rendering efficiency, and thereby improves the player's game experience.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a rendering method and apparatus in a game, and an electronic device.
Background
In games, adding special effects to target objects is a very common rendering technique. To achieve better visuals, the number of overlapping effect layers and the screen area each layer occupies keep increasing, so the fill rate consumed by effects rises and the rendering efficiency of game effects drops. In the related art, effects are usually rendered at reduced resolution: the effect is drawn against a downsampled depth map and then composited into the target image. However, because the effect interpenetrates objects in the target image, mixing an effect drawn at reduced resolution with the target object can produce mosaic artifacts, incorrect occlusion relations and similar problems, which degrade the rendering effect and game quality and harm the player's game experience.
Disclosure of Invention
Accordingly, the present invention is directed to a method and apparatus for rendering in a game, and an electronic device, so as to improve the rendering effect and the game quality, and further improve the game experience of a player.
In a first aspect, an embodiment of the present invention provides a rendering method in a game, where the method includes: acquiring a plurality of depth maps of a target image; wherein the target image comprises at least one image area; the depth map comprises depth information corresponding to the image area; drawing preset rendering objects based on the depth maps to obtain initial rendering results of the rendering objects corresponding to each depth map; dividing a target image into a plurality of rendering areas based on a plurality of depth maps and an initial rendering result corresponding to each depth map; wherein the rendering region includes an edge region, a non-edge region, and a transition region between the edge region and the non-edge region; and based on an initial rendering result of the rendering object corresponding to each depth map, rendering the rendering area by adopting a rendering mode preset by each rendering area to obtain a rendered target image.
Further, the step of acquiring a plurality of depth maps of the target image includes: according to the depth information corresponding to each image area in the target image, carrying out downsampling processing on the target image to obtain a plurality of depth maps; the depth information corresponding to the image region in different depth maps is different for the same image region.
Further, the plurality of depth maps includes a first depth map and a second depth map; for the same image area, the depth information corresponding to the image area in the first depth map is the maximum depth value in the image area, and the depth information corresponding to the image area in the second depth map is the minimum depth value in the image area.
Further, the step of drawing a preset rendering object based on the depth map to obtain an initial rendering result of the rendering object corresponding to each depth map includes: generating rendering targets corresponding to each depth map aiming at a plurality of depth maps; responding to a drawing instruction aiming at a rendering object, and acquiring a depth value, a color value and a transparency value of each pixel point in the rendering object; for each pixel point in each depth map, if the depth value of the pixel point in the rendering object is smaller than the depth value of the pixel point in the depth map, determining the color value of the pixel point in the rendering object and the target transparency value of the pixel point as an initial rendering result of the pixel point corresponding to the depth map; the target transparency value is determined based on a preset mixed mode of the rendering object; and determining an initial rendering result of each pixel point in the depth map as a pixel value of a pixel point corresponding to a rendering target corresponding to the depth map.
Further, the transparency value of each pixel point is stored in the rendering target; the step of determining the target transparency value based on the preset mixing mode of the rendering object includes: if the preset mixing mode is a multiplication mixing mode, calculating the product of the transparency value of the pixel point in the rendering object and the transparency value of the pixel point stored in the rendering target, and determining the result of subtracting this product from the transparency value of the pixel point stored in the rendering target as the target transparency value; and if the preset mixing mode is an addition mixing mode, determining the transparency value of the pixel point stored in the rendering target as the target transparency value.
Further, the step of dividing the target image into a plurality of rendering areas based on the plurality of depth maps and the initial rendering result corresponding to each depth map includes: calculating the error weight of each pixel point in the depth map based on the plurality of depth maps and the initial rendering result corresponding to each depth map; the error weight is used for indicating the error magnitude of the pixel value of each channel of the pixel point; for each pixel point in the depth map, calculating an average depth value of the depth values of the pixel points in the plurality of depth maps; and dividing the target image into a plurality of rendering areas according to the error weight and the average depth value of each pixel point in the depth map.
Further, the step of calculating the error weight of each pixel point in the depth map based on the plurality of depth maps and the initial rendering result corresponding to each depth map includes: for each pixel in the depth map, setting the error weight of the pixel as:
A = abs((zmax - zmin) * (maxDepthcolor - minDepthcolor) * (dzmaxX + dzmaxY));
wherein A is the error weight of the pixel point; abs takes the absolute value of the whole product (zmax - zmin) * (maxDepthcolor - minDepthcolor) * (dzmaxX + dzmaxY); zmax is the linear depth value of the pixel point in the first depth map; zmin is the linear depth value of the pixel point in the second depth map; maxDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the first depth map; minDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the second depth map; dzmaxX is the absolute value of the partial derivative of the linear depth value of the pixel point in the first depth map in the x direction; dzmaxY is the absolute value of the partial derivative of the linear depth value of the pixel point in the first depth map in the y direction.
Further, the step of dividing the target image into a plurality of rendering areas according to the error weight and the average depth value of each pixel point in the depth map includes: for each pixel point in the depth map, determining the error weight of the pixel point as a first pixel value of the pixel; determining a designated pixel point with an average depth value smaller than that of the pixel point from adjacent pixel points of the pixel point, and determining the maximum error weight in the pixel point and the designated pixel point as a second pixel value of the pixel point; wherein the adjacent pixel points include: a current pixel point and a pixel point adjacent to the current pixel point in a horizontal direction and a vertical direction; and dividing the target image into a plurality of rendering areas according to the first pixel value and the second pixel value of each pixel point in the depth map.
Further, the step of dividing the target image into a plurality of rendering areas according to the first pixel value and the second pixel value of each pixel point in the depth map includes: for each image region in the target image, performing the following operations: if the first pixel value and the second pixel value of the target pixel point corresponding to the image area in the depth map are smaller than a preset threshold value, dividing the image area into a non-edge area; if the first pixel value of the target pixel corresponding to the image area in the depth image is smaller than a preset threshold value and the second pixel value of the target pixel corresponding to the image area in the depth image is larger than the preset threshold value, dividing the image area into transition areas; and if the first pixel value of the target pixel point corresponding to the image region in the depth map is larger than a preset threshold value, dividing the image region into edge regions.
Further, based on an initial rendering result of the rendering object corresponding to each depth map, a rendering mode preset for each rendering area is adopted to render the rendering area, and a rendered target image is obtained, which includes: determining a final rendering result of each pixel point in the target image by adopting a preset rendering mode of each rendering area based on an initial rendering result corresponding to each depth map; and mixing the final rendering result to the corresponding pixel point in the target image to obtain the rendered target image.
Further, determining the final rendering result of each pixel point in the target image in the rendering mode preset for each rendering region, based on the initial rendering result corresponding to each depth map, includes: for each pixel point of the non-edge region in the target image, setting the final rendering result of the pixel point as: Res1 = MaxDepthRT*(1 - Ad) + MainRT*Ad; wherein Res1 is the rendering result of the pixel point; MaxDepthRT is the value of the color channel of the target pixel point corresponding to the image area to which the pixel point belongs, in the initial rendering result corresponding to the first depth map; Ad is the value of the transparency channel of that target pixel point in the initial rendering result corresponding to the first depth map; MainRT is the value of the color channel of the pixel point in the target image.
Further, determining the final rendering result of each pixel point in the target image in the rendering mode preset for each rendering region, based on the initial rendering result corresponding to each depth map, includes: for each pixel point of the transition region in the target image, setting the final rendering result of the pixel point as: Res2 = MinDepthRT*(1 - Ax) + MainRT*Ax; wherein Res2 is the rendering result of the pixel point; MinDepthRT is the value of the color channel of the target pixel point corresponding to the image area to which the pixel point belongs, in the initial rendering result corresponding to the second depth map; Ax is the value of the transparency channel of that target pixel point in the initial rendering result corresponding to the second depth map; MainRT is the value of the color channel of the pixel point in the target image.
Further, determining the final rendering result of each pixel point in the target image in the rendering mode preset for each rendering region, based on the initial rendering result corresponding to each depth map, includes: for each pixel point of the edge region in the target image, setting the final rendering result of the pixel point as: Res3 = MidDepthRT*(1 - Am) + MainRT*Am; MidDepthRes = lerp(MaxDepthRT, MinDepthRT, (d - dx)/(dd - dx)); wherein Res3 is the rendering result of the pixel point; MaxDepthRT is the value of the color channel of the target pixel point corresponding to the image area to which the pixel point belongs, in the initial rendering result corresponding to the first depth map; MinDepthRT is the value of the color channel of that target pixel point in the initial rendering result corresponding to the second depth map; d is the depth value of the pixel point in the target image; dd is the depth value of the target pixel point corresponding to the image area to which the pixel point belongs in the first depth map; dx is the depth value of that target pixel point in the second depth map; lerp is a linear interpolation function; MidDepthRes is the interpolated image; MidDepthRT is the color value of the pixel point's color channel in the interpolated image MidDepthRes; Am is the transparency value of the pixel point's transparency channel in the interpolated image MidDepthRes; MainRT is the value of the color channel of the pixel point in the target image.
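For illustration only, the three per-region formulas can be combined into one routine. The following Python sketch is hypothetical (the scalar arguments stand in for the per-pixel channel values defined above), and interpolating the transparency channel of MidDepthRes with the same lerp factor as the color channel is an assumption, since the text defines Am only as the transparency channel of the interpolated image:

```python
def lerp(a, b, t):
    """Linear interpolation: a + (b - a) * t."""
    return a + (b - a) * t

def final_pixel(region, main_rt, max_rt, ad, min_rt, ax, d, dd, dx):
    """Final rendering result of one pixel point, per its rendering region."""
    if region == "non-edge":
        return max_rt * (1 - ad) + main_rt * ad          # Res1
    if region == "transition":
        return min_rt * (1 - ax) + main_rt * ax          # Res2
    # Edge region: interpolate the two initial rendering results by depth.
    t = (d - dx) / (dd - dx)
    mid_rt = lerp(max_rt, min_rt, t)  # color channel of MidDepthRes
    am = lerp(ad, ax, t)              # transparency channel (assumed lerped the same way)
    return mid_rt * (1 - am) + main_rt * am              # Res3
```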
In a second aspect, an embodiment of the present invention provides a rendering apparatus in a game, including: the acquisition module is used for acquiring a plurality of depth maps of the target image; wherein the target image comprises at least one image area; the depth map comprises depth information corresponding to the image area; aiming at the same image area, the depth information corresponding to the image area in different depth maps is different; the initial rendering result determining module is used for drawing preset rendering objects based on the depth maps to obtain initial rendering results of the rendering objects corresponding to each depth map; the rendering region determining module is used for dividing the target image into a plurality of rendering regions based on the plurality of depth maps and the initial rendering result corresponding to each depth map; wherein the rendering region includes an edge region, a non-edge region, and a transition region between the edge region and the non-edge region; the rendering module is used for rendering the rendering area by adopting a preset rendering mode of each rendering area based on an initial rendering result of the rendering object corresponding to each depth map, and a rendered target image is obtained.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, the memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the in-game rendering method of any one of the first aspects.
In a fourth aspect, embodiments of the present invention provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a method of rendering in a game of any one of the first aspects.
The embodiment of the invention has the following beneficial effects:
The invention provides an in-game rendering method and apparatus, and an electronic device: a plurality of depth maps of a target image are acquired; a preset rendering object is drawn based on the depth maps to obtain an initial rendering result of the rendering object corresponding to each depth map; the target image is divided into an edge region, a non-edge region and a transition region between the edge region and the non-edge region, based on the plurality of depth maps and the initial rendering result corresponding to each depth map; and each rendering region is rendered in a rendering mode preset for that region, based on the initial rendering result of the rendering object corresponding to each depth map, to obtain a rendered target image. In this manner, the target image is divided into the edge region, the non-edge region and the transition region according to its plurality of depth maps and the initial rendering result corresponding to each depth map, and different regions are rendered in different rendering modes, which avoids mosaic artifacts and incorrect occlusion relations in the edge and transition regions, improves the rendering effect and rendering efficiency, and thereby improves the player's game experience.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are some embodiments of the invention and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a rendering method in a game according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method of in-game rendering provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an in-game rendering apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
At present, adding special effects to target objects in games is a very common rendering technique. As game quality improves, and to achieve better visuals, the number of overlapping effect layers and the screen area each layer occupies keep increasing, so the fill rate consumed by effects rises and the rendering efficiency of game effects drops. Given how effects behave during rendering (particle effects in particular are very common), rendering them at reduced resolution has, in most cases, little impact on the player's game experience.
The current methods for solving the above problems mainly comprise the following two types:
1. Firstly, downsampling a target image, generating a maximum depth map with 1/4 resolution, rendering a special effect with 1/4 resolution, and recording the expected position and variance of the special effect on a rendering target during rendering; and carrying out interpolation mixing on the special effect and the target image according to normal distribution according to the position and variance information recorded on the rendering target corresponding to the maximum depth map with 1/4 resolution. In this way, when drawing a special effect on a rendering target, it is necessary to assume that the special effect conforms to a normal distribution. This assumption works well when the effect variance is small, or the effect is not interleaved with other objects at the same pixel point on the target image. However, when the distribution variance of the special effects is large and other objects with the same pixel point on the target image are inserted, the assumed error is large, so that the rendering result is incorrect when the special effects are mixed into the target image.
2. First, the scene is rendered conventionally and the depth values of the target image are written into a buffer to obtain a depth buffer, which is downsampled into a low-resolution rendering target; the particles are rendered into an off-screen rendering object and depth-tested against the depth buffer; the rendering target is then upsampled, and the rendering object is added into the upsampled rendering target to obtain the rendered image. Since upsampling can cause mosaic artifacts, an edge-detection method is used to extract regions with discontinuous depth values, and the effect is rendered again at full resolution in those regions. Although this redraws the effect correctly in the regions with discontinuous depth values, it increases the number of render passes and draw-call invocations, which affects rendering efficiency and performance, so the efficiency gain is limited.
Based on the above problems, the embodiment of the invention provides a method, a device and an electronic device for rendering in a game, which can be applied to the game with a rendering scene.
For the sake of understanding the present embodiment, first, a method for rendering in a game disclosed in the present embodiment will be described in detail, as shown in fig. 1, where the method includes the following steps:
step S102, obtaining a plurality of depth maps of a target image; wherein the target image comprises at least one image area; the depth map comprises depth information corresponding to the image area;
the target image is typically an image of a designated object in a game scene or an image of a designated area, and typically includes elements such as a game character controlled by a player, a game character not controlled by a player, props, buildings, environments, trees, roads, and the like in the game scene. The image areas included in the target image have the same size, and each image area may include a plurality of pixels in the target image, such as 4 pixels, 9 pixels, 16 pixels, and the like; the number of specific pixel points can be set according to the size of the target image and actual needs, and each image area is not overlapped with each other.
The depth information corresponding to the image area may be a maximum depth value, a minimum depth value, an average depth value of all the depth values, or the like in the image area. For example, if the plurality of depth maps includes three depth maps, for the same image area, the depth information corresponding to the image area in the first depth map may be a maximum depth value, the depth information corresponding to the image area in the second depth map may be a minimum depth value, and the depth information corresponding to the image area in the third depth map may be an average depth value.
For example, suppose the depth maps are to be 1/4 the size of the 1920×1080 target image in each dimension, i.e., 480×270; each image area of the target image then contains 16 pixel points (a 4×4 block). For the first depth map, the depth value of each pixel point in the depth map is the maximum depth value in the corresponding image area (i.e., among those 16 pixel points). For the second depth map, the depth value of each pixel point in the depth map is the minimum depth value in the corresponding image area. In general, the step of acquiring multiple depth maps of the target image may be understood as downsampling the depth image of the target image: for each sampling area (i.e., image area), the maximum depth value, the minimum depth value or the average depth value of the area is taken, yielding multiple depth maps.
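As an illustration of this downsampling, a minimal numpy sketch follows (names are hypothetical; this is a CPU stand-in for what would normally run on GPU data), assuming 4×4 image areas as in the example above:

```python
import numpy as np

def downsample_depth(depth, block=4):
    """Split the depth buffer into block x block image areas and keep the
    maximum depth (first depth map) and minimum depth (second depth map)."""
    h, w = depth.shape
    assert h % block == 0 and w % block == 0, "resolution must divide evenly"
    tiles = depth.reshape(h // block, block, w // block, block)
    first_depth_map = tiles.max(axis=(1, 3))   # max depth value per image area
    second_depth_map = tiles.min(axis=(1, 3))  # min depth value per image area
    return first_depth_map, second_depth_map

# 1920x1080 target image, 4x4 image areas -> two 480x270 depth maps
depth_buffer = np.random.rand(1080, 1920).astype(np.float32)
max_dm, min_dm = downsample_depth(depth_buffer, block=4)
assert max_dm.shape == (270, 480) and min_dm.shape == (270, 480)
```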
Step S104, drawing preset rendering objects based on the depth maps to obtain initial rendering results of the rendering objects corresponding to each depth map;
the preset rendering object generally refers to a special effect that the target image needs to be drawn and rendered, and in this embodiment, the preset rendering object refers to a game special effect, and generally refers to a preset special effect image. Usually, special effects that are not normally present in reality are pre-manufactured by computer software. The size of the rendering object may be the same as the size of the depth map, and the color value, the transparency value, and the depth value of each pixel point are stored in the rendering object. The initial rendering result generally includes a target color value and a target transparency value for each pixel. Wherein the transparency value may also be referred to as an alpha value.
In a specific embodiment, to improve drawing efficiency, the preset rendering object can be drawn for multiple depth maps simultaneously within the same drawing instruction, obtaining the rendering result corresponding to each depth map. The graphics card and a pixel shader can each draw the rendering object based on a different depth map. Specifically, the graphics card can perform the depth test using the depth value of each pixel point in the depth map and the depth value of each pixel point in the rendering object, clipping the rendering object against the depth map to obtain the initial rendering result of the rendering object corresponding to that depth map; at the same time, a pixel shader can perform the depth test using the depth value of each pixel point in the depth map and the depth value of each pixel point in the rendering object, obtaining the initial rendering result of the pixel points that pass the test. The initial rendering result of the rendering object corresponding to each depth map is also typically written to a rendering buffer.
In general, the step of drawing the preset rendering object based on the depth map to obtain the initial rendering result of the rendering object corresponding to each depth map may be understood as off-screen rendering, so as to apply the initial rendering result to the rendered target image before displaying the final rendered target image.
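A rough sketch of this off-screen pass, assuming the depth maps, the effect's per-pixel depth/color/transparency and the render targets are same-sized numpy arrays (all names hypothetical; the multiplicative transparency rule shown in the last line is one of the blend modes detailed further below):

```python
import numpy as np

def draw_effect_offscreen(depth_maps, fx_depth, fx_color, fx_alpha, targets):
    """Depth-test the preset rendering object (the effect) against every
    low-resolution depth map and write the passing pixels into the render
    target generated for that depth map (off-screen rendering)."""
    for depth_map, (rt_color, rt_alpha) in zip(depth_maps, targets):
        # A pixel passes the depth test if the effect lies in front of the scene.
        passed = fx_depth < depth_map
        rt_color[passed] = fx_color[passed]
        # Target transparency follows the effect's preset mixing mode;
        # the multiplicative rule is shown here (see the blend-mode sketch below).
        rt_alpha[passed] = (1.0 - fx_alpha[passed]) * rt_alpha[passed]
```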
Step S106, dividing the target image into a plurality of rendering areas based on a plurality of depth maps and an initial rendering result corresponding to each depth map; wherein the rendering region includes an edge region, a non-edge region, and a transition region between the edge region and the non-edge region;
Because the target image contains regions where the depth values between objects and the rendering object are discontinuous, or regions where effects and non-effects interpenetrate, rendering such regions in the traditional way produces mosaic artifacts, incorrect occlusion relations and similar problems. Therefore, according to the obtained depth maps and the initial rendering result corresponding to each depth map, the target image can be divided into an edge region, a non-edge region and a transition region between the edge region and the non-edge region. Separating out the transition region and rendering it individually can further improve the rendering effect, making it more realistic and free of flaws.
For each pixel point in the depth map, determining a rendering area to which the pixel point belongs according to the depth value of the pixel point in each depth map and the color value of the pixel point in the initial rendering result corresponding to each depth map. In actual implementation, for each pixel point in the depth map, the absolute value of the difference value of the depth value of the pixel point in the multiple depth maps can be calculated, the absolute value of the difference value of the color value of the pixel point in the initial rendering result corresponding to the multiple depth maps is calculated, and the rendering area to which the pixel point belongs is determined according to the product of the absolute values of the difference values, wherein the product is generally used for indicating the error of the pixel value of each channel of the pixel point; generally, the larger the product, the closer the pixel is to the edge region, and the smaller the product, the closer the pixel is to the non-edge region. The transition region may be determined according to the depth value of the pixel point in the multiple depth maps and the product of the pixel point, and generally, when the depth value of the pixel point is greater than the depth value of an adjacent pixel point and the error of the adjacent pixel point of the pixel point is greater, the pixel point is indicated to be in the transition region.
The image area is divided according to the positions of the pixels of the target image, specifically, the pixels of a certain area can be divided into one image area, the target image can be divided into at least one image area, namely, all the pixels in the target image are divided into one image area, or 16 pixels in the image area can be divided into 4 image areas, and each image area comprises 4 mutually adjacent pixels. Each pixel includes a variety of pixel information such as depth information, color information, and the like. The pixel information of each pixel point in the depth map only comprises the depth information of the image area, specifically the depth information of one target pixel point in the image area. A plurality of rendering regions may constitute the target image, each rendering region including at least one pixel, and generally the shape of the rendering region is various and unpredictable, and is divided according to the step S106. But the pixel information of each pixel in the rendering area is the same as the pixel information of each pixel in the target image.
Step S108, based on the initial rendering result of the rendering object corresponding to each depth map, rendering the rendering area by adopting a preset rendering mode of each rendering area to obtain a rendered target image.
And calculating the final rendering results of the pixel points of the edge region, the non-edge region and the transition region by adopting different rendering modes respectively, and then mixing the final rendering results of each pixel point into the target image to obtain a rendered target image. The rendering modes comprise a plurality of modes, and specifically comprise a first rendering mode corresponding to an edge area, a second rendering mode corresponding to a non-edge area and a third rendering mode corresponding to a transition area. The first rendering mode mainly mixes the depth values of a plurality of depth maps, the initial rendering result corresponding to each depth map, and the depth value and the color value of the target image according to a preset proportion to obtain the final rendering result of each pixel point in the edge area. The second rendering mode mainly mixes the depth value of the maximum depth map with the initial rendering result corresponding to the maximum depth map and the pixel value of the target image to obtain the final rendering result of each pixel point in the non-edge area. The third rendering mode is mainly to mix the depth value of the minimum depth map with the initial rendering result corresponding to the minimum depth map and the pixel value of the target image to obtain the final rendering result of each pixel point in the transition region.
Specifically, for the non-edge region and the transition region, the final rendering result of each pixel point in the non-edge region can be directly calculated according to the initial rendering result corresponding to the depth map and the color value of the target image. When the final rendering result of each pixel point is calculated by the non-edge area and the transition area, the initial rendering results corresponding to different depth maps are adopted; the depth map typically employed for non-edge regions is larger than the depth values of the depth map employed for transition regions. For the edge region, the final rendering result of each pixel point in the edge region can be calculated according to the depth values of the plurality of depth maps, the initial rendering result corresponding to each depth map, and the depth value and the color value of the target image.
According to the rendering method in the game, a plurality of depth maps of the target image are obtained; drawing preset rendering objects based on the depth maps to obtain initial rendering results of the rendering objects corresponding to each depth map; dividing a target image into an edge region, a non-edge region and a transition region between the edge region and the non-edge region based on a plurality of depth maps and initial rendering results corresponding to each depth map; and based on an initial rendering result of the rendering object corresponding to each depth map, rendering the rendering area by adopting a rendering mode preset by each rendering area to obtain a rendered target image. In the mode, the depth information among the plurality of depth maps is different, the target image is divided into the edge area, the non-edge area and the transition area according to the plurality of depth maps of the target image and the initial rendering result corresponding to each depth map, and different rendering modes are adopted for rendering according to different areas, so that the problems of incorrect mosaic and shielding relation of the edge area and the transition area are avoided, the rendering effect and the rendering efficiency are improved, and the game experience of a player is further improved.
The steps of acquiring a plurality of depth maps of a target image are specifically described below, including: according to the depth information corresponding to each image area in the target image, carrying out downsampling processing on the target image to obtain a plurality of depth maps; and aiming at the same image area, the depth information corresponding to the image area in different depth maps is different.
Specifically, the pixel point with the largest depth value (i.e., the depth information) can be extracted from each image area in the target image, and the extracted pixel points assembled into a map, giving a depth map in which each pixel point is the pixel point with the largest depth value in the corresponding image area. Likewise, a depth map can be obtained by extracting the pixel point with the smallest depth value from each image area and assembling the extracted pixel points into a map, giving a depth map in which each pixel point is the pixel point with the smallest depth value in the corresponding image area. Of course, the pixel point with the median depth value could also be extracted from each image area and the extracted pixel points assembled into a depth map.
In order to improve the rendering effect of the target image, the rendering area is divided more accurately, and the rendering result is drawn, generally, the depth information of the target image can be used for distinguishing the edge area of the target image or the inter-penetrating area of special effect and non-special effect, so that the plurality of depth maps comprise a first depth map and a second depth map; for the same image area, the depth information corresponding to the image area in the first depth map is the maximum depth value in the image area, and the depth information corresponding to the image area in the second depth map is the minimum depth value in the image area. The first depth map may be referred to as a maximum depth map and the second depth map may be referred to as a minimum depth map.
In actual implementation, downsampling is carried out on a depth image of a target image, a maximum depth value corresponding to each image area is obtained for each image area, and the maximum depth value is determined to be a depth value of a pixel point corresponding to the image area in a first depth image; and similarly, for each image area, acquiring a minimum depth value corresponding to the image area, and determining the minimum depth value as the depth value of the pixel point corresponding to the image area in the second depth map.
In the above manner, by acquiring the first depth map and the second depth map, richer depth information in the target image can be obtained, and further, based on the depth values in the first depth map and the second depth map, richer and finer initial rendering results and more accurate rendering areas can be acquired.
In one possible implementation manner, in order to improve the rendering effect and the rendering efficiency of the target image, a determination process of the initial rendering result is described below, which specifically includes:
(1) Generating rendering targets corresponding to each depth map aiming at a plurality of depth maps;
the rendering target is usually a buffer zone, and is mainly used for recording an initial rendering result output after rendering, and the special effect cannot be directly drawn to a target image and displayed on a screen; specifically, the generation of the rendering target corresponding to the first depth map may be referred to as a first rendering target, and the generation of the rendering target corresponding to the second depth map may be referred to as a second rendering target. In addition, the rendering target stores a transparency value for each pixel.
(2) Responding to a drawing instruction aiming at a rendering object, and acquiring a depth value, a color value and a transparency value of each pixel point in the rendering object;
The rendering object is preset, and after a drawing instruction for the rendering object is called, a depth value, a color value and a transparency value of each pixel point in the rendering object can be obtained first. In actual implementation, after a drawing instruction for a rendering object is called, a depth value, a color value and a transparency value of each pixel point in each depth map are obtained for the pixel point in the rendering object.
(3) For each pixel point in each depth map, if the depth value of the pixel point in the rendering object is smaller than the depth value of the pixel point in the depth map, determining the color value of the pixel point in the rendering object and the target transparency value of the pixel point as an initial rendering result of the pixel point corresponding to the depth map; the target transparency value is determined based on a preset mixed mode of the rendering object;
if the depth value of the pixel point in the rendering object is smaller than the depth value of the pixel point in the depth map, the rendering object is indicated to be not blocked by the object in the depth map and is finally displayed in the rendered target image; conversely, if the depth value of the pixel point in the rendering object is greater than the depth value of the pixel point in the depth map, it is indicated that the rendering object is blocked by the object in the depth map and is not displayed in the rendered target image. Therefore, only when the depth value of the pixel in the rendering object is smaller than the depth value of the pixel in the depth map, the color value of the pixel in the rendering object and the target transparency value of the pixel are determined as the initial rendering result of the pixel.
As the plurality of depth maps include the first depth map and the second depth map, it can be understood that, for each pixel point in the first depth map, if the depth value of the pixel point in the rendering object is smaller than the depth value of the pixel point in the first depth map, determining the color value of the pixel point in the rendering object and the target transparency value of the pixel point as the initial rendering result of the pixel point corresponding to the first depth map; the target transparency value is determined based on a mixing mode preset by the rendering object. Meanwhile, for each pixel point in the second depth map, if the depth value of the pixel point in the rendering object is smaller than the depth value of the pixel point in the second depth map, determining the color value of the pixel point in the rendering object and the target transparency value of the pixel point as an initial rendering result of the pixel point corresponding to the second depth map; the target transparency value is determined based on a mixing mode preset by the rendering object.
Wherein, the transparency value of each pixel point is stored in the rendering target. The mixing modes preset by the rendering object generally comprise a multiplication mixing mode and an addition mixing mode, and naturally comprise other mixing modes; the process of determining the target transparency value when the multiplicative mixed mode and the additive mixed mode is specifically described below.
If the preset mixing mode is a multiplication mixing mode, the product of the transparency value of the pixel point in the rendering object and the transparency value of the pixel point stored in the rendering target is calculated, and the result of subtracting this product from the transparency value of the pixel point stored in the rendering target is determined as the target transparency value; if the preset mixing mode is an addition mixing mode, the transparency value of the pixel point stored in the rendering target is determined as the target transparency value.
Specifically, if the preset blending mode is the multiplicative blending mode, the target transparency value of the pixel point in the first depth map may be calculated by the following formula:
Ad=(1-srcAlpha)*destAlpha;
the target transparency value of the pixel point in the second depth map can be calculated by the following formula:
Ax=(1-srcAlpha)*destAlpha;
ad is a target transparency value of the pixel point in an initial rendering result corresponding to the first depth map; ax is a target transparency value of the pixel point in the initial rendering result corresponding to the second depth map; the srcAlpha is a transparency value of the pixel point stored by a preset rendering object; the destAlpha is the transparency value of the pixel point stored in the rendering target, and the manner can obtain a more accurate transparency value, so that the rendering effect is further improved.
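The two blend-mode rules can be expressed directly; this small Python sketch (hypothetical names) mirrors the Ad/Ax formulas above:

```python
def target_alpha(src_alpha, dest_alpha, blend_mode):
    """Target transparency value (Ad or Ax) under the effect's preset mixing
    mode. src_alpha is the transparency stored by the rendering object;
    dest_alpha is the transparency already stored in the render target."""
    if blend_mode == "multiplicative":
        # destAlpha - srcAlpha*destAlpha == (1 - srcAlpha) * destAlpha
        return (1.0 - src_alpha) * dest_alpha
    if blend_mode == "additive":
        # Additive mixing keeps the stored transparency unchanged.
        return dest_alpha
    raise ValueError(f"unsupported blend mode: {blend_mode}")
```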
(4) And determining an initial rendering result of each pixel point in the depth map as a pixel value of a pixel point corresponding to a rendering target corresponding to the depth map.
Because the initial rendering result is not drawn directly onto the target image, and the initial rendering result is needed for dividing the target image into regions, the initial rendering result of each pixel point in the depth map can be determined as the pixel value of the corresponding pixel point in the rendering target corresponding to that depth map; the rendering targets are mainly used to store the initial rendering results corresponding to the first depth map and the second depth map.
In a specific implementation, the rendering object is drawn based on the first depth map by the graphics card to obtain the initial rendering result corresponding to the first depth map. Specifically, the first depth map may be set as the current depth buffer of DirectX (an application program interface, abbreviated DX), so that clipping can be performed directly against the first depth map to obtain the corresponding initial rendering result. At bottom, setting the first depth map as the current depth buffer of DX means that DX instructs the graphics hardware, through the graphics driver, to perform depth clipping. The rendering object is drawn based on the second depth map by the pixel shader to obtain the initial rendering result corresponding to the second depth map. In practice, whether the initial rendering result is obtained through the graphics card or the pixel shader, the drawing principle is the same: for each pixel point in each depth map, if the depth value of the pixel point in the rendering object is smaller than the depth value of the pixel point in the depth map, the color value of the pixel point in the rendering object is determined as part of the initial rendering result; meanwhile, the target transparency value of the pixel point is determined according to the mixing mode of the rendering object and is also included in the initial rendering result.
In the above manner, the preset rendering object is drawn based on each depth map, the obtained initial rendering result corresponding to each depth map not only comprises the color value but also comprises the transparency value, so that the pixel value of the initial rendering result is enriched.
In a specific embodiment, in order to improve the rendering effect of the target image, the problem that the mosaic and occlusion relationship are incorrect in the edge area of the target image is avoided, and as shown in fig. 2, a process of dividing the target image into a plurality of rendering areas is specifically described below.
Step S202, calculating the error weight of each pixel point in the depth map based on a plurality of depth maps and the initial rendering result corresponding to each depth map; the error weight is used for indicating the error magnitude of the pixel value of each channel of the pixel point;
The magnitude of the error weight of each pixel point indicates whether the pixel point is ambiguous in color, depth derivative and so on. For example, a large error weight may mean that the final rendering result of the pixel point cannot be determined simply from the first depth map or the second depth map and its corresponding initial rendering result; instead, the final rendering result of the pixel point needs to be determined by mixed interpolation.
The initial rendering results corresponding to the first depth map and the second depth map are obtained through depth tests against the depth values of the respective depth maps. The error weight of each pixel point can then be calculated from the difference between the depth values of the same pixel point in the first depth map and the second depth map, the difference between the color values of the same pixel point in the initial rendering results corresponding to the first depth map and the second depth map, and the sum of the derivatives of the depth value of the pixel point in the first depth map.
One embodiment is: for each pixel in the depth map, setting the error weight of the pixel as:
A = abs((zmax - zmin) * (maxDepthcolor - minDepthcolor) * (dzmaxX + dzmaxY));
wherein A is the error weight of the pixel point; abs takes the absolute value of the whole product (zmax - zmin) * (maxDepthcolor - minDepthcolor) * (dzmaxX + dzmaxY); zmax is the linear depth value of the pixel point in the first depth map; zmin is the linear depth value of the pixel point in the second depth map; maxDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the first depth map; minDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the second depth map; dzmaxX is the absolute value of the partial derivative of the linear depth value of the pixel point in the first depth map in the x direction; dzmaxY is the absolute value of the partial derivative of the linear depth value of the pixel point in the first depth map in the y direction.
The linear depth value zmax of the pixel point in the first depth map and the linear depth value zmin of the pixel point in the second depth map can be calculated as follows:
zmax = (2.0*near1*far1)/(far1 + near1 - (depth1*2.0 - 1.0)*(far1 - near1)); zmin = (2.0*near2*far2)/(far2 + near2 - (depth2*2.0 - 1.0)*(far2 - near2)); wherein depth1 is the original depth value of the pixel point in the first depth map; depth2 is the original depth value of the pixel point in the second depth map; near1 is the distance from the nearest visible point in the first depth map to the camera; near2 is the distance from the nearest visible point in the second depth map to the camera; far1 is the distance from the farthest visible point in the first depth map to the camera; far2 is the distance from the farthest visible point in the second depth map to the camera.
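Putting the linearization and the error-weight formula together, a numpy sketch follows (hypothetical names; forward differences stand in for a pixel shader's ddx/ddy, and both depth maps are assumed to share the same camera near/far for brevity):

```python
import numpy as np

def linearize(depth, near, far):
    """Linear depth from a [0, 1] depth-buffer value:
    z = 2*near*far / (far + near - (depth*2 - 1)*(far - near))."""
    return (2.0 * near * far) / (far + near - (depth * 2.0 - 1.0) * (far - near))

def error_weight(depth1, depth2, max_color, min_color, near, far):
    """A = abs((zmax - zmin)*(maxDepthcolor - minDepthcolor)*(dzmaxX + dzmaxY))."""
    zmax = linearize(depth1, near, far)   # first (max) depth map
    zmin = linearize(depth2, near, far)   # second (min) depth map
    # Absolute partial derivatives of zmax in x and y, via forward differences.
    dzmax_x = np.abs(np.diff(zmax, axis=1, append=zmax[:, -1:]))
    dzmax_y = np.abs(np.diff(zmax, axis=0, append=zmax[-1:, :]))
    return np.abs((zmax - zmin) * (max_color - min_color) * (dzmax_x + dzmax_y))
```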
Step S204, for each pixel point in the depth map, calculating an average depth value of the depth values of the pixel points in the plurality of depth maps;
specifically, for each pixel point in the depth map, an average value of the depth value of the pixel point in the first depth map and the depth value of the pixel point in the second depth map, that is, the average depth value, is calculated.
Step S206, dividing the target image into a plurality of rendering areas according to the error weight and the average depth value of each pixel point in the depth map.
Specifically, the target area may be divided into a plurality of rendering areas according to the size of the error weight, for example, if the error weight of the pixel point in the depth map is greater than a preset threshold value, the image area corresponding to the pixel point may be determined as an edge area; if the error weight of the pixel point in the depth map is smaller than a preset threshold value, the image area of the target image corresponding to the pixel point can be determined to be a non-edge area. In addition, in order to further improve the rendering effect, the non-edge region is further divided into transition regions, for example, if the error weight of the pixel point in the depth map is smaller than a preset threshold value, but on the premise that the average depth value of the pixel point meets a preset condition, the error weight of the pixel point adjacent to the pixel point is greater than the preset threshold value, it may be determined that the image region of the target image corresponding to the pixel point is determined as the transition region.
In the mode, the error weight and the average depth value are calculated through the plurality of depth maps and the initial rendering result corresponding to each depth map, and the target image can be divided into a plurality of rendering areas through the size of the error weight and the average depth value, so that the rendering effect and the rendering efficiency of the target image are improved, and the problem that the mosaic and the shielding relation are incorrect in the edge area of the target image is avoided.
In a specific embodiment, the step of dividing the target image into a plurality of rendering areas according to the error weight and the average depth value of each pixel point in the depth map includes:
(1) For each pixel point in the depth map, determining the error weight of the pixel point as a first pixel value of the pixel; determining a designated pixel point with an average depth value smaller than that of the pixel point from adjacent pixel points of the pixel point, and determining the maximum error weight in the pixel point and the designated pixel point as a second pixel value of the pixel point; wherein the adjacent pixel points include: a current pixel point and a pixel point adjacent to the current pixel point in a horizontal direction and a vertical direction;
in actual implementation, a first rendering target with the same size as the depth map may be generated in advance, and the error weight of each pixel point is determined as the pixel value of the first channel of the pixel point in the first rendering target; meanwhile, determining the average depth value of each pixel point as the pixel value of a second channel of the pixel point in the first rendering target; wherein the first channel may be an R channel and a G channel.
Then, a second rendering target is generated in advance, with the same size as the first rendering target, and the error weight of each pixel point is determined as the first pixel value in the first channel of the pixel point in the second rendering target, i.e., the first pixel value of the pixel point. Meanwhile, if the adjacent pixel points of the pixel point in the first rendering target include designated pixel points whose average depth value is smaller than that of the pixel point, the largest first-channel error weight among the pixel point and those designated pixel points is determined as the second pixel value, stored in the second channel of the pixel point in the second rendering target. The adjacent pixel points are the pixel points whose coordinates differ from those of the current pixel point by a total of 1 (the sum of the absolute differences in x and y equals 1), i.e., the four pixel points above, below, to the left and to the right of the current pixel point; that is, the adjacent pixel points generally comprise four pixel points.
It should be noted that if, among the adjacent pixel points of a pixel point, there is no designated pixel point whose average depth value is smaller than that of the pixel point, the error weight of the pixel point itself is determined as its second pixel value.
(2) And dividing the target image into a plurality of rendering areas according to the first pixel value and the second pixel value of each pixel point in the depth map.
The first pixel value represents the original error weight of the pixel point, while the second pixel value performs a one-step region expansion toward the adjacent pixel points with smaller depth values. Only when the preset condition is met does the second pixel value represent the maximum error weight among the pixel point and its designated pixel points; when the preset condition is not met, the second pixel value represents the original error weight of the pixel point. The preset condition is that, among the adjacent pixel points of the pixel point in the first rendering target, there exists a designated pixel point whose average depth value is smaller than that of the pixel point.
Specifically, the target image may be divided into a plurality of rendering areas according to the first pixel value and the second pixel value of each pixel point in the depth map: when both the first pixel value and the second pixel value of a pixel point are small, the image area of the target image corresponding to the pixel point may be determined as a non-edge area; when both values are large, the image area may be determined as an edge area; and when the first pixel value is small but the second pixel value is large, the image area may be determined as a transition area.
In the above manner, the error weight of each pixel point is taken as its first pixel value; the region of each pixel point is expanded toward the adjacent pixel points with smaller depth values, and the maximum error weight among the designated pixel points and the pixel point itself is taken as its second pixel value; finally, the target image is divided according to the first and second pixel values. This divides the target image into edge, non-edge and transition areas more accurately, further improving the rendering effect and rendering efficiency; a sketch of this computation is given below.
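For illustration, a minimal CPU-side sketch of the first/second pixel value computation follows (Python with NumPy; the function and array names are hypothetical, and the patent itself stores these values in channels of GPU rendering targets rather than in arrays):

```python
import numpy as np

def expand_error_weights(error_weight: np.ndarray, avg_depth: np.ndarray):
    """Compute per-pixel first/second pixel values as described above.

    error_weight, avg_depth: 2D float arrays with the size of the depth map.
    Returns (first, second): first is the original error weight; second is
    the maximum error weight over the pixel and those of its four neighbours
    whose average depth is smaller than the pixel's own, i.e. a one-step
    expansion toward nearer geometry.
    """
    h, w = error_weight.shape
    first = error_weight.copy()
    second = error_weight.copy()
    # Four neighbours at Manhattan distance 1: up, down, left, right.
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        # Clip at the borders, so a border pixel's missing neighbour is
        # the pixel itself (its depth is never smaller, so no effect).
        ny = np.clip(np.arange(h) + dy, 0, h - 1)
        nx = np.clip(np.arange(w) + dx, 0, w - 1)
        n_weight = error_weight[ny][:, nx]
        n_depth = avg_depth[ny][:, nx]
        nearer = n_depth < avg_depth  # the "designated" neighbours
        second = np.where(nearer, np.maximum(second, n_weight), second)
    return first, second
```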
In a specific embodiment, the step of dividing the target image into a plurality of rendering areas according to the first pixel value and the second pixel value of each pixel point in the depth map includes: for each image area in the target image, performing the following operations: if the first pixel value and the second pixel value of the target pixel point corresponding to the image area in the depth map are both smaller than a preset threshold value, dividing the image area into a non-edge area; if the first pixel value of the target pixel point corresponding to the image area in the depth map is smaller than the preset threshold value and its second pixel value is larger than the preset threshold value, dividing the image area into a transition area; and if the first pixel value of the target pixel point corresponding to the image area in the depth map is larger than the preset threshold value, dividing the image area into an edge area.
Each image area in the target image corresponds to one pixel point in the depth map. The preset threshold value can be set according to the actual situation and the actual application scenario.
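Continuing the hypothetical arrays above, the division itself might look as follows; the threshold value is an assumed scalar, since the text leaves it to the actual situation and application scenario:

```python
import numpy as np

NON_EDGE, TRANSITION, EDGE = 0, 1, 2

def classify_regions(first: np.ndarray, second: np.ndarray,
                     threshold: float) -> np.ndarray:
    """Label each depth-map pixel (and thus its image area) per the rules above."""
    labels = np.full(first.shape, NON_EDGE, dtype=np.uint8)
    labels[(first < threshold) & (second > threshold)] = TRANSITION
    labels[first > threshold] = EDGE
    return labels
```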
In a specific embodiment, in order to further improve the rendering effect of the target image, different rendering modes are adopted for different rendering areas, and the rendering areas are rendered to obtain a rendered target image, which specifically includes:
determining a final rendering result of each pixel point in the target image by adopting the rendering mode preset for each rendering area, based on the initial rendering result corresponding to each depth map; and blending the final rendering results into the corresponding pixel points in the target image to obtain the rendered target image.
Specifically, based on the first depth map, the initial rendering result corresponding to the first depth map and the target image, the final rendering result of each pixel point in the non-edge area of the target image is determined using the rendering mode preset for the non-edge area; based on the second depth map, the initial rendering result corresponding to the second depth map and the target image, the final rendering result of each pixel point in the transition area is determined using the rendering mode preset for the transition area; and based on the first depth map, the second depth map, their respective initial rendering results and the target image, the final rendering result of each pixel point in the edge area is determined using the rendering mode preset for the edge area. The final rendering result of every pixel point in the target image is thus obtained.
In a specific embodiment, the step of determining a final rendering result of each pixel point in the target image by adopting a preset rendering mode of each rendering area based on an initial rendering result corresponding to each depth map includes:
for each pixel point of a non-edge area in the target image, setting the final rendering result of the pixel point as:
Res1=MaxDepthRT*(1-Ad)+MainRT*Ad;
wherein Res1 is the rendering result of the pixel point; maxDepthRT is a pixel value of a target pixel point color channel corresponding to an image area to which the pixel point belongs in an initial rendering result corresponding to the first depth map; ad is the pixel value of a transparency channel of a target pixel point corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the first depth map; mainRT is the pixel value of the color channel of the pixel point in the target image.
For each pixel point of the transition region in the target image, setting the final rendering result of the pixel point as:
Res2=MinDepthRT*(1-Ax)+MainRT*Ax;
wherein Res2 is the rendering result of the pixel point; minDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; ax is the pixel value of the transparency channel of the target pixel point corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; mainRT is the pixel value of the color channel of the pixel point in the target image.
For each pixel point of the edge area in the target image, setting the final rendering result of the pixel point as follows:
Res3=MidDepthRT*(1-Am)+MainRT*Am;
MidDepthRes=lerp(MaxDepthRT, MinDepthRT, (d-dx)/(dd-dx));
wherein Res3 is the rendering result of the pixel point; maxDepthRT is a pixel value of a target pixel point color channel corresponding to an image area to which the pixel point belongs in an initial rendering result corresponding to the first depth map; minDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; d is the depth value of the pixel point in the target image; dd is a depth value of a target pixel point corresponding to an image area to which the pixel point belongs in the first depth map; dx is the depth value of the target pixel point corresponding to the image area to which the pixel point belongs in the second depth map; lerp is a linear interpolation function; midDepthRes is the interpolated image; midDepthRT is the color value of the pixel color channel in the interpolation image MidDepthRes; am is the transparency value of the pixel transparency channel in the interpolated image MidDepthRes; mainRT is the pixel value of the color channel of the pixel point in the target image.
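The following sketch applies the three formulas per region, continuing the hypothetical Python/NumPy arrays and region labels above. Interpolating the transparency channel with the same lerp weight as the color channels, and the small epsilon guarding the division when dd equals dx, are assumptions not spelled out in the text:

```python
import numpy as np

def lerp(a, b, t):
    """Linear interpolation a + (b - a) * t, matching the patent's lerp."""
    return a + (b - a) * t

def composite_regions(labels, main_rt, d, max_rt, ad, min_rt, ax, dd, dx):
    """Blend the effect into the target image per region (Res1/Res2/Res3).

    labels     : (H, W) region labels (NON_EDGE / TRANSITION / EDGE)
    main_rt, d : (H, W, 3) color and (H, W) depth of the target image
    max_rt, ad : color and alpha of the first depth map's initial result
    min_rt, ax : color and alpha of the second depth map's initial result
    dd, dx     : (H, W) depths from the first and second depth maps
    All inputs are assumed to be already sampled at image resolution.
    """
    out = main_rt.copy()

    ne = labels == NON_EDGE  # Res1 = MaxDepthRT*(1-Ad) + MainRT*Ad
    a = ad[ne][:, None]
    out[ne] = max_rt[ne] * (1 - a) + main_rt[ne] * a

    tr = labels == TRANSITION  # Res2 = MinDepthRT*(1-Ax) + MainRT*Ax
    a = ax[tr][:, None]
    out[tr] = min_rt[tr] * (1 - a) + main_rt[tr] * a

    ed = labels == EDGE  # Res3 via the interpolated image MidDepthRes
    t = (d[ed] - dx[ed]) / np.maximum(dd[ed] - dx[ed], 1e-6)
    mid_rgb = lerp(max_rt[ed], min_rt[ed], t[:, None])
    mid_a = lerp(ad[ed], ax[ed], t)[:, None]
    out[ed] = mid_rgb * (1 - mid_a) + main_rt[ed] * mid_a
    return out
```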
In this manner, different rendering modes are adopted for different rendering areas of the target image, so that the special effect can be drawn at a reduced fill rate while maintaining rendering efficiency, a satisfactory rendering effect with few flaws is obtained, and the problems of mosaic and incorrect occlusion relations in areas with discontinuous depth information are resolved.
In addition, with this rendering method applied at the client, experiments show that the special-effect rendering efficiency is improved by a factor of 4.5. The method handles the occlusion relation between the special effect and the objects in the target image essentially correctly, preventing artifacts such as mosaic and incorrect blending and thereby improving the rendering effect; moreover, all special effects can be drawn in a single pass, which further improves rendering efficiency.
Corresponding to the above method embodiment, an embodiment of the present invention provides a rendering device in a game, as shown in fig. 3, where the device includes:
an acquisition module 31, configured to acquire multiple depth maps of a target image; wherein the target image comprises at least one image area; the depth map comprises depth information corresponding to the image area;
the initial rendering result determining module 32 is configured to draw a preset rendering object based on the depth map, and obtain an initial rendering result of the rendering object corresponding to each depth map;
a rendering region determining module 33, configured to divide the target image into a plurality of rendering regions based on the plurality of depth maps and the initial rendering result corresponding to each depth map; wherein the rendering region includes an edge region, a non-edge region, and a transition region between the edge region and the non-edge region;
The rendering module 34 is configured to render the rendering area by adopting a rendering mode preset for each rendering area based on an initial rendering result of the rendering object corresponding to each depth map, so as to obtain a rendered target image.
The rendering device in the game acquires a plurality of depth maps of the target image; draws preset rendering objects based on the depth maps to obtain the initial rendering result of the rendering object corresponding to each depth map; divides the target image into an edge area, a non-edge area and a transition area between them based on the plurality of depth maps and the initial rendering result corresponding to each depth map; and, based on the initial rendering result of the rendering object corresponding to each depth map, renders each rendering area with the rendering mode preset for it to obtain the rendered target image. In this manner, since the depth information differs among the plurality of depth maps, the target image can be divided into edge, non-edge and transition areas from the depth maps and their corresponding initial rendering results, and different rendering modes can be adopted for different areas. This avoids mosaic and incorrect occlusion relations in the edge and transition areas, improves the rendering effect and rendering efficiency, and further improves the game experience of the player.
Further, the above-mentioned acquisition module is further configured to: perform downsampling processing on the target image according to the depth information corresponding to each image area in the target image to obtain the plurality of depth maps; for the same image area, the depth information corresponding to the image area differs between depth maps.
Further, the plurality of depth maps include a first depth map and a second depth map; for the same image area, the depth information corresponding to the image area in the first depth map is the maximum depth value in the image area, and the depth information corresponding to the image area in the second depth map is the minimum depth value in the image area.
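As an illustrative sketch of such downsampling (Python with NumPy; the block size, the divisibility of the image dimensions by it, and all names are assumptions), the first and second depth maps might be produced as follows; each pixel of the resulting maps then corresponds to one block-by-block image area, matching the correspondence described above:

```python
import numpy as np

def downsample_depth(depth: np.ndarray, block: int = 2):
    """Build the first (max) and second (min) depth maps by block reduction.

    depth: (H, W) full-resolution depth; H and W are assumed divisible
    by block. Each depth-map pixel covers one block x block image area.
    """
    h, w = depth.shape
    tiles = depth.reshape(h // block, block, w // block, block)
    first_depth = tiles.max(axis=(1, 3))   # maximum depth per image area
    second_depth = tiles.min(axis=(1, 3))  # minimum depth per image area
    return first_depth, second_depth
```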
Further, the initial rendering result determining module is further configured to: generate, for the plurality of depth maps, a rendering target corresponding to each depth map; respond to a drawing instruction for the rendering object by acquiring the depth value, color value and transparency value of each pixel point in the rendering object; for each pixel point in each depth map, if the depth value of the pixel point in the rendering object is smaller than the depth value of the pixel point in the depth map, determine the color value of the pixel point in the rendering object and the target transparency value of the pixel point as the initial rendering result of the pixel point corresponding to the depth map, the target transparency value being determined based on the preset mixing mode of the rendering object; and determine the initial rendering result of each pixel point in the depth map as the pixel value of the corresponding pixel point in the rendering target corresponding to the depth map.
Further, the initial rendering result determining module is further configured to: if the preset mixing mode is a multiplication mixing mode, calculate the product value of the transparency value of the pixel point in the rendering object and the transparency value of the pixel point stored in the rendering target, and determine the difference obtained by subtracting the product value from the transparency value of the pixel point stored in the rendering target as the target transparency value; and if the preset mixing mode is an addition mixing mode, determine the transparency value of the pixel point stored in the rendering target as the target transparency value.
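A minimal sketch of this target transparency computation (plain Python; the string mode names and the initial stored transparency of 1.0 are assumptions for illustration):

```python
def target_transparency(mixing_mode: str, obj_alpha: float,
                        stored_alpha: float) -> float:
    """Target transparency per the preset mixing mode described above.

    stored_alpha is the transparency already held in the rendering target
    for this pixel, assumed initialised to 1.0 before any effect is drawn.
    """
    if mixing_mode == "multiply":
        # stored - stored*obj == stored*(1 - obj): the newly drawn effect
        # reduces how much of the background remains visible here.
        return stored_alpha - obj_alpha * stored_alpha
    if mixing_mode == "add":
        # Additive effects leave the accumulated transparency unchanged.
        return stored_alpha
    raise ValueError(f"unknown mixing mode: {mixing_mode!r}")
```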
Further, the rendering area determining module is further configured to: calculate the error weight of each pixel point in the depth map based on the plurality of depth maps and the initial rendering result corresponding to each depth map, the error weight indicating the error magnitude of the pixel values of the channels of the pixel point; for each pixel point in the depth map, calculate the average depth value of the depth values of the pixel point in the plurality of depth maps; and divide the target image into a plurality of rendering areas according to the error weight and the average depth value of each pixel point in the depth map.
Further, the rendering area determining module is further configured to: for each pixel point in the depth map, set the error weight of the pixel point as:
A=abs((zmax-zmin)*(maxDepthcolor-minDepthcolor)*(dzmaxX+dzmaxY));
Wherein A is the error weight of the pixel point; abs takes the absolute value of the whole expression (zmax-zmin)*(maxDepthcolor-minDepthcolor)*(dzmaxX+dzmaxY); zmax is the linear depth value of the pixel point in the first depth map; zmin is the linear depth value of the pixel point in the second depth map; maxDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the first depth map; minDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the second depth map; dzmaxX is the absolute value of the partial derivative of the linear depth value of the pixel point in the first depth map in the x direction; dzmaxY is the absolute value of the partial derivative of the linear depth value of the pixel point in the first depth map in the y direction.
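A sketch of this error-weight formula (Python with NumPy). Collapsing the color of each initial rendering result to a single scalar per pixel, and using np.gradient as the analogue of the screen-space partial derivatives, are assumptions:

```python
import numpy as np

def error_weight(zmax, zmin, max_color, min_color):
    """Per-pixel error weight A from the formula above.

    zmax, zmin           : (H, W) linear depths from the first/second depth maps
    max_color, min_color : (H, W) scalar color values (e.g. luminance) of the
                           two initial rendering results
    """
    # Screen-space partial derivatives of zmax (ddx/ddy analogues).
    dz_y, dz_x = np.gradient(zmax)
    return np.abs((zmax - zmin) * (max_color - min_color)
                  * (np.abs(dz_x) + np.abs(dz_y)))
```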
Further, the rendering area determining module is further configured to: for each pixel point in the depth map, determine the error weight of the pixel point as a first pixel value of the pixel point; determine, from the adjacent pixel points of the pixel point, the designated pixel points whose average depth value is smaller than that of the pixel point, and determine the maximum error weight among the pixel point and the designated pixel points as a second pixel value of the pixel point, wherein the adjacent pixel points are the pixel points adjacent to the current pixel point in the horizontal direction and the vertical direction; and divide the target image into a plurality of rendering areas according to the first pixel value and the second pixel value of each pixel point in the depth map.
Further, the rendering area determining module is further configured to: for each image area in the target image, perform the following operations: if the first pixel value and the second pixel value of the target pixel point corresponding to the image area in the depth map are both smaller than a preset threshold value, divide the image area into a non-edge area; if the first pixel value of the target pixel point corresponding to the image area in the depth map is smaller than the preset threshold value and its second pixel value is larger than the preset threshold value, divide the image area into a transition area; and if the first pixel value of the target pixel point corresponding to the image area in the depth map is larger than the preset threshold value, divide the image area into an edge area.
Further, the rendering module is further configured to: determine the final rendering result of each pixel point in the target image using the rendering mode preset for each rendering area, based on the initial rendering result corresponding to each depth map; and blend the final rendering results into the corresponding pixel points in the target image to obtain the rendered target image.
Further, the rendering module is further configured to: for each pixel point of a non-edge area in the target image, setting the final rendering result of the pixel point as: res1=maxdepthrt (1-Ad) +mainrt Ad; wherein Res1 is the rendering result of the pixel point; maxDepthRT is a pixel value of a target pixel point color channel corresponding to an image area to which a pixel point belongs in an initial rendering result corresponding to a first depth map; ad is a pixel value of a transparency channel of a target pixel point corresponding to an image area to which the pixel point belongs in an initial rendering result corresponding to the first depth map; mainRT is the pixel value of the pixel color channel in the target image.
Further, the rendering module is further configured to: for each pixel point of the transition region in the target image, setting the final rendering result of the pixel point as: res2=mindepthrt (1-Ax) +mainrt Ax; wherein Res2 is the rendering result of the pixel point; minDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; ax is the pixel value of the transparency channel of the target pixel point corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; mainRT is the pixel value of the pixel color channel in the target image.
Further, the rendering module is further configured to: for each pixel point of the edge area in the target image, set the final rendering result of the pixel point as: Res3=MidDepthRT*(1-Am)+MainRT*Am; MidDepthRes=lerp(MaxDepthRT, MinDepthRT, (d-dx)/(dd-dx)); wherein Res3 is the rendering result of the pixel point; MaxDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the first depth map; MinDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; d is the depth value of the pixel point in the target image; dd is the depth value of the target pixel point corresponding to the image area to which the pixel point belongs in the first depth map; dx is the depth value of the target pixel point corresponding to the image area to which the pixel point belongs in the second depth map; lerp is a linear interpolation function; MidDepthRes is the interpolated image; MidDepthRT is the color value of the pixel color channel in the interpolated image MidDepthRes; Am is the transparency value of the pixel transparency channel in the interpolated image MidDepthRes; MainRT is the pixel value of the pixel point color channel in the target image.
The in-game rendering device provided by the embodiment of the invention has the same technical characteristics as the in-game rendering method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
The present embodiment also provides an electronic device including a processor and a memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the rendering method in the game described above. The electronic device may be a server or a terminal device.
Referring to fig. 4, the electronic device includes a processor 100 and a memory 101, the memory 101 storing machine executable instructions that can be executed by the processor 100, the processor 100 executing the machine executable instructions to implement the rendering method in the game described above.
Further, the electronic device shown in fig. 4 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The memory 101 may include a high-speed random access memory (RAM, Random Access Memory), and may further include a non-volatile memory, such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 103 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, etc. Bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. Buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only one bi-directional arrow is shown in FIG. 4, but this does not mean that there is only one bus or one type of bus.
The processor 100 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 100 or by instructions in the form of software. The processor 100 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP for short), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field-Programmable Gate Array, FPGA for short) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component. The methods, steps, and logical blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and, in combination with its hardware, performs the steps of the method of the previous embodiment.
The present embodiments also provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the above-described rendering method in a game.
The method and apparatus for rendering in a game and the computer program product of the electronic device provided in the embodiments of the present invention include a computer readable storage medium storing program codes, where instructions included in the program codes may be used to execute the method described in the foregoing method embodiments, and specific implementation may refer to the method embodiments and will not be repeated herein.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood by those skilled in the art in specific cases.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above examples are only specific embodiments of the present invention for illustrating the technical solution of the present invention, but not for limiting the scope of the present invention, and although the present invention has been described in detail with reference to the foregoing examples, it will be understood by those skilled in the art that the present invention is not limited thereto: any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or perform equivalent substitution of some of the technical features, while remaining within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.
Claims (15)
1. A method of in-game rendering, the method comprising:
acquiring a plurality of depth maps of a target image; wherein the target image comprises at least one image area; the depth map comprises depth information corresponding to the image area;
generating rendering targets corresponding to each depth map aiming at the plurality of depth maps;
Responding to a drawing instruction aiming at the rendering object, and acquiring a depth value, a color value and a transparency value of each pixel point in the rendering object;
for each pixel point in each depth map, if the depth value of the pixel point in the rendering object is smaller than the depth value of the pixel point in the depth map, determining the color value of the pixel point in the rendering object and the target transparency value of the pixel point as an initial rendering result of the pixel point corresponding to the depth map; the target transparency value is determined based on a preset mixing mode of the rendering object;
determining an initial rendering result of each pixel point in the depth map as a pixel value of the pixel point corresponding to a rendering target corresponding to the depth map;
dividing the target image into a plurality of rendering areas based on the plurality of depth maps and the initial rendering result corresponding to each depth map; wherein the rendering region includes an edge region, a non-edge region, and a transition region between the edge region and the non-edge region;
and based on an initial rendering result of the rendering object corresponding to each depth map, rendering the rendering area by adopting a preset rendering mode of each rendering area to obtain the rendered target image.
2. The method of claim 1, wherein the step of acquiring a plurality of depth maps of the target image comprises:
performing downsampling processing on the target image according to the depth information corresponding to each image area in the target image to obtain the plurality of depth maps; and aiming at the same image area, the depth information corresponding to the image area in different depth maps is different.
3. The method of claim 1, wherein the plurality of depth maps comprises a first depth map and a second depth map; for the same image area, the depth information corresponding to the image area in the first depth map is the maximum depth value in the image area, and the depth information corresponding to the image area in the second depth map is the minimum depth value in the image area.
4. The method of claim 1, wherein the rendering target has a transparency value for each pixel stored therein; the step of determining the target transparency value based on the preset mixing mode of the rendering object comprises the following steps:
if the preset mixing mode is a multiplication mixing mode, calculating a product value of the transparency value of the pixel point in the rendering object and the transparency value of the pixel point stored in the rendering target;
determining, as the target transparency value, the difference obtained by subtracting the product value from the transparency value of the pixel point stored in the rendering target;
and if the preset mixing mode is an addition mixing mode, determining the transparency value of the pixel point stored in the rendering target as the target transparency value.
5. The method of claim 1, wherein the step of dividing the target image into a plurality of rendering regions based on the plurality of depth maps and the initial rendering result corresponding to each of the depth maps comprises:
calculating the error weight of each pixel point in the depth map based on the plurality of depth maps and the initial rendering result corresponding to each depth map; the error weight is used for indicating the error magnitude of the pixel value of each channel of the pixel point;
for each pixel point in the depth map, calculating an average depth value of the depth values of the pixel points in the plurality of depth maps;
and dividing the target image into a plurality of rendering areas according to the error weight and the average depth value of each pixel point in the depth map.
6. The method of claim 5, wherein the step of calculating the error weight of each pixel in the depth map based on the plurality of depth maps and the initial rendering result corresponding to each depth map comprises:
For each pixel point in the depth map, setting the error weight of the pixel point as follows:
A=abs((zmax-zmin)*(maxDepthcolor-minDepthcolor)*(dzmaxX+dzmaxY));
wherein A is the error weight of the pixel point; abs takes the absolute value of the whole expression (zmax-zmin)*(maxDepthcolor-minDepthcolor)*(dzmaxX+dzmaxY); zmax is a linear depth value of the pixel point in the first depth map; zmin is the linear depth value of the pixel point in the second depth map; maxDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the first depth map; minDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the second depth map; dzmaxX is the absolute value of the partial derivative of the linear depth value of the pixel point in the first depth map in the x direction; dzmaxY is the absolute value of the partial derivative of the linear depth value of the pixel point in the first depth map in the y direction.
7. The method of claim 5, wherein the step of dividing the target image into a plurality of rendering regions according to the error weight and the average depth value of each pixel in the depth map comprises:
for each pixel point in the depth map, determining the error weight of the pixel point as a first pixel value of the pixel point; determining, from the adjacent pixel points of the pixel point, a designated pixel point whose average depth value is smaller than that of the pixel point, and determining the maximum error weight among the pixel point and the designated pixel point as a second pixel value of the pixel point; wherein the adjacent pixel points include: the pixel points adjacent to the current pixel point in the horizontal direction and the vertical direction;
And dividing the target image into a plurality of rendering areas according to the first pixel value and the second pixel value of each pixel point in the depth map.
8. The method of claim 7, wherein the step of dividing the target image into a plurality of rendering regions based on the first pixel value and the second pixel value of each pixel point in the depth map comprises:
for each image region in the target image, performing the following operations:
if the first pixel value and the second pixel value of the target pixel point corresponding to the image area in the depth map are smaller than a preset threshold value, dividing the image area into the non-edge area;
if the first pixel value of the target pixel corresponding to the image area in the depth map is smaller than the preset threshold value and the second pixel value of the target pixel corresponding to the image area in the depth map is larger than the preset threshold value, dividing the image area into the transition area;
and if the first pixel value of the target pixel point corresponding to the image area in the depth map is larger than the preset threshold value, dividing the image area into the edge area.
9. The method according to claim 1, wherein the step of rendering the rendering area by adopting a preset rendering mode of each rendering area based on an initial rendering result of the rendering object corresponding to each depth map to obtain the rendered target image comprises the following steps:
Based on the initial rendering result corresponding to each depth map, determining a final rendering result of each pixel point in the target image by adopting a rendering mode preset by each rendering area;
and mixing the final rendering result to the corresponding pixel point in the target image to obtain the rendered target image.
10. The method according to claim 9, wherein the step of determining the final rendering result of each pixel point in the target image by adopting a preset rendering mode of each rendering area based on the initial rendering result corresponding to each depth map includes:
for each pixel point of the non-edge area in the target image, setting the final rendering result of the pixel point as:
Res1=MaxDepthRT*(1-Ad)+MainRT*Ad;
wherein Res1 is a rendering result of the pixel point; maxDepthRT is a pixel value of a target pixel color channel corresponding to an image area to which the pixel belongs in an initial rendering result corresponding to the first depth map; ad is a pixel value of a target pixel transparency channel corresponding to an image area to which the pixel belongs in an initial rendering result corresponding to the first depth map; mainRT is the pixel value of the pixel point color channel in the target image.
11. The method according to claim 9, wherein the step of determining the final rendering result of each pixel point in the target image by adopting a preset rendering mode of each rendering area based on the initial rendering result corresponding to each depth map includes:
for each pixel point of the transition region in the target image, setting the final rendering result of the pixel point as:
Res2=MinDepthRT*(1-Ax)+MainRT*Ax;
wherein Res2 is the rendering result of the pixel point; minDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; ax is the pixel value of the transparency channel of the target pixel point corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; mainRT is the pixel value of the pixel point color channel in the target image.
12. The method according to claim 9, wherein the step of determining the final rendering result of each pixel point in the target image by adopting a preset rendering mode of each rendering area based on the initial rendering result corresponding to each depth map includes:
For each pixel point of the edge area in the target image, setting the final rendering result of the pixel point as follows:
Res3=MidDepthRT*(1-Am)+MainRT*Am;
MidDepthRes=lerp(MaxDepthRT, MinDepthRT, (d-dx)/(dd-dx));
wherein Res3 is a rendering result of the pixel point; maxDepthRT is a pixel value of a target pixel color channel corresponding to an image area to which the pixel belongs in an initial rendering result corresponding to the first depth map; minDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; d is the depth value of the pixel point in the target image; dd is a depth value of a target pixel point corresponding to an image area to which the pixel point belongs in the first depth map; dx is the depth value of the target pixel point corresponding to the image area to which the pixel point belongs in the second depth map; lerp is a linear interpolation function; midDepthRes is the interpolated image; midDepthRT is the color value of the pixel color channel in the interpolation image MidDepthRes; am is the transparency value of the pixel transparency channel in the interpolated image MidDepthRes; mainRT is the pixel value of the pixel point color channel in the target image.
13. A rendering device in a game, the device comprising:
The acquisition module is used for acquiring a plurality of depth maps of the target image; wherein the target image comprises at least one image area; the depth map comprises depth information corresponding to the image area;
the initial rendering result determining module is used for generating rendering targets corresponding to each depth map aiming at the plurality of depth maps; responding to a drawing instruction aiming at the rendering object, and acquiring a depth value, a color value and a transparency value of each pixel point in the rendering object; for each pixel point in each depth map, if the depth value of the pixel point in the rendering object is smaller than the depth value of the pixel point in the depth map, determining the color value of the pixel point in the rendering object and the target transparency value of the pixel point as an initial rendering result of the pixel point corresponding to the depth map; the target transparency value is determined based on a preset mixing mode of the rendering object; determining an initial rendering result of each pixel point in the depth map as a pixel value of the pixel point corresponding to a rendering target corresponding to the depth map;
the rendering area determining module is used for dividing the target image into a plurality of rendering areas based on the plurality of depth maps and the initial rendering result corresponding to each depth map; wherein the rendering region includes an edge region, a non-edge region, and a transition region between the edge region and the non-edge region;
And the rendering module is used for rendering the rendering area by adopting a preset rendering mode of each rendering area based on an initial rendering result of the rendering object corresponding to each depth map, so as to obtain the rendered target image.
14. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the in-game rendering method of any one of claims 1-12.
15. A machine-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the in-game rendering method of any one of claims 1-12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111076331.XA CN113781620B (en) | 2021-09-14 | 2021-09-14 | Rendering method and device in game and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113781620A CN113781620A (en) | 2021-12-10 |
CN113781620B true CN113781620B (en) | 2023-06-30 |
Family
ID=78843872
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111076331.XA Active CN113781620B (en) | 2021-09-14 | 2021-09-14 | Rendering method and device in game and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113781620B (en) |
Also Published As
Publication number | Publication date |
---|---|
CN113781620A (en) | 2021-12-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||