Rendering method and device in game and electronic equipment

Publication number: CN113781620A (application CN202111076331.XA); granted as CN113781620B
Inventors: Zhang Qiping (张启平), Qin Binbin (秦斌斌)
Assignee: Netease Hangzhou Network Co., Ltd.
Original language: Chinese (zh)
Legal status: Granted; Active

Classifications

    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides an in-game rendering method and device and an electronic device. A plurality of depth maps of a target image are acquired, and an initial rendering result of a rendering object is determined for each depth map; the target image is then divided into an edge region, a non-edge region and a transition region; and each rendering region is rendered in its own preset rendering mode, based on the initial rendering results corresponding to the depth maps, to obtain the rendered target image. Because the division into edge, non-edge and transition regions is driven by the multiple depth maps of the target image and the initial rendering result corresponding to each depth map, and different rendering modes are used for different regions, mosaic artifacts and incorrect occlusion relationships in the edge and transition regions are avoided, the rendering effect and rendering efficiency are improved, and the player's game experience is improved accordingly.

Description

Rendering method and device in game and electronic equipment
Technical Field
The invention relates to the field of computer technology, and in particular to an in-game rendering method and device and an electronic device.
Background
In games, adding special effects to a target object is a very common form of rendering. To achieve better visual results, the number of special-effect layers and the screen area each layer occupies during rendering keep increasing, so the fill rate consumed by special effects rises and the efficiency of rendering them falls. In the related art, effects are usually rendered at reduced resolution: the effect is drawn against a down-sampled depth map and then composited into the target image. However, the effect and the target image interpenetrate, so blending an effect drawn at reduced resolution with the target object produces mosaic artifacts, incorrect occlusion relationships and similar defects, which degrade the rendering effect and game quality and harm the player's game experience.
Disclosure of Invention
In view of this, the present invention provides an in-game rendering method and device and an electronic device, so as to improve the rendering effect and game quality and thereby the game experience of the player.
In a first aspect, an embodiment of the present invention provides a rendering method in a game, the method including: acquiring a plurality of depth maps of a target image, wherein the target image comprises at least one image area and each depth map comprises depth information corresponding to the image area; drawing a preset rendering object based on the depth maps to obtain an initial rendering result of the rendering object corresponding to each depth map; dividing the target image into a plurality of rendering areas based on the plurality of depth maps and the initial rendering result corresponding to each depth map, the rendering areas comprising an edge area, a non-edge area and a transition area between the edge area and the non-edge area; and rendering each rendering area in the rendering mode preset for it, based on the initial rendering result of the rendering object corresponding to each depth map, to obtain a rendered target image.
Further, the step of acquiring a plurality of depth maps of the target image includes: down-sampling the target image according to the depth information corresponding to each image area in the target image to obtain the plurality of depth maps; for the same image area, the depth information corresponding to that area differs from one depth map to another.
Further, the plurality of depth maps include a first depth map and a second depth map; for the same image area, the depth information corresponding to the image area in the first depth map is the maximum depth value in the image area, and the depth information corresponding to the image area in the second depth map is the minimum depth value in the image area.
Further, the step of drawing a preset rendering object based on the depth maps to obtain an initial rendering result of the rendering object corresponding to each depth map includes: generating, for the plurality of depth maps, a rendering target corresponding to each depth map; in response to a drawing instruction for the rendering object, acquiring the depth value, color value and transparency value of each pixel point in the rendering object; for each pixel point in each depth map, if the depth value of the pixel point in the rendering object is smaller than its depth value in the depth map, determining the color value of the pixel point in the rendering object together with the target transparency value of the pixel point as the initial rendering result of the pixel point corresponding to that depth map, the target transparency value being determined based on a preset blending mode of the rendering object; and determining the initial rendering result of each pixel point in the depth map as the pixel value of the corresponding pixel point in the rendering target corresponding to that depth map.
Further, the transparency value of each pixel point is stored in the rendering target, and the step of determining the target transparency value based on the preset blending mode of the rendering object includes: if the preset blending mode is the multiplication blending mode, calculating the product of the transparency value of the pixel point in the rendering object and the transparency value of the pixel point stored in the rendering target, subtracting that product from the transparency value stored in the rendering target, and determining the difference as the target transparency value; if the preset blending mode is the addition blending mode, determining the transparency value of the pixel point stored in the rendering target as the target transparency value.
Further, the step of dividing the target image into a plurality of rendering regions based on the plurality of depth maps and the initial rendering result corresponding to each depth map includes: calculating an error weight for each pixel point in the depth maps based on the plurality of depth maps and the initial rendering result corresponding to each depth map, the error weight indicating the size of the error in the pixel values of the channels of that pixel point; calculating, for each pixel point in the depth maps, the average of the depth values of the pixel point across the depth maps; and dividing the target image into a plurality of rendering areas according to the error weight and average depth value of each pixel point in the depth maps.
Further, the step of calculating the error weight of each pixel point in the depth maps, based on the plurality of depth maps and the initial rendering result corresponding to each depth map, includes: for each pixel point in the depth maps, setting the error weight of the pixel point as:
A = abs((zmax - zmin) * (maxDepthcolor - minDepthcolor) * (dzmaxX + dzmaxY));
where A is the error weight of the pixel point; abs takes the absolute value of the whole product; zmax is the linear depth value of the pixel point in the first depth map; zmin is the linear depth value of the pixel point in the second depth map; maxDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the first depth map; minDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the second depth map; dzmaxX is the absolute value of the partial derivative, in the x direction, of the linear depth value of the pixel point in the first depth map; and dzmaxY is the absolute value of the partial derivative of the same quantity in the y direction.
Further, the step of dividing the target image into a plurality of rendering areas according to the error weight and average depth value of each pixel point in the depth maps includes: for each pixel point in the depth maps, determining the error weight of the pixel point as the first pixel value of the pixel point; determining, among its adjacent pixel points, the designated pixel points whose average depth value is smaller than that of the pixel point, and determining the maximum of the error weights of the pixel point and the designated pixel points as the second pixel value of the pixel point, the adjacent pixel points comprising the current pixel point and the pixel points adjacent to it in the horizontal and vertical directions; and dividing the target image into a plurality of rendering areas according to the first pixel value and second pixel value of each pixel point in the depth maps.
Further, the step of dividing the target image into a plurality of rendering regions according to the first pixel value and second pixel value of each pixel point in the depth maps includes performing the following for each image region in the target image: if the first and second pixel values of the target pixel point corresponding to the image area in the depth map are both smaller than a preset threshold, classifying the image area as a non-edge area; if the first pixel value is smaller than the preset threshold and the second pixel value is larger than it, classifying the image area as a transition area; and if the first pixel value is larger than the preset threshold, classifying the image area as an edge area.
Further, the step of rendering each rendering area in its preset rendering mode, based on the initial rendering result of the rendering object corresponding to each depth map, to obtain a rendered target image includes: determining the final rendering result of each pixel point in the target image in the rendering mode preset for its rendering area, based on the initial rendering result corresponding to each depth map; and blending the final rendering result into the corresponding pixel point in the target image to obtain the rendered target image.
Further, the step of determining the final rendering result of each pixel point in the target image, in the rendering mode preset for each rendering area and based on the initial rendering result corresponding to each depth map, includes: for each pixel point of a non-edge area in the target image, setting the final rendering result of the pixel point as: Res1 = MaxDepthRT * (1 - Ad) + MainRT * Ad; where Res1 is the final rendering result of the pixel point; MaxDepthRT is the pixel value of the color channel of the target pixel point, corresponding to the image area to which the pixel point belongs, in the initial rendering result corresponding to the first depth map; Ad is the pixel value of the transparency channel of that target pixel point in the same initial rendering result; and MainRT is the pixel value of the color channel of the pixel point in the target image.
Further, the step of determining the final rendering result of each pixel point in the target image, in the rendering mode preset for each rendering area and based on the initial rendering result corresponding to each depth map, includes: for each pixel point of the transition region in the target image, setting the final rendering result of the pixel point as: Res2 = MinDepthRT * (1 - Ax) + MainRT * Ax; where Res2 is the final rendering result of the pixel point; MinDepthRT is the pixel value of the color channel of the target pixel point, corresponding to the image area to which the pixel point belongs, in the initial rendering result corresponding to the second depth map; Ax is the pixel value of the transparency channel of that target pixel point in the same initial rendering result; and MainRT is the pixel value of the color channel of the pixel point in the target image.
Further, the step of determining the final rendering result of each pixel point in the target image, in the rendering mode preset for each rendering area and based on the initial rendering result corresponding to each depth map, includes: for each pixel point of the edge area in the target image, setting the final rendering result of the pixel point as: Res3 = MidDepthRT * (1 - Am) + MainRT * Am; MidDepthRes = lerp(MaxDepthRT, MinDepthRT, (d - dx) / (dd - dx)); where Res3 is the final rendering result of the pixel point; MaxDepthRT is the pixel value of the color channel of the target pixel point, corresponding to the image area to which the pixel point belongs, in the initial rendering result corresponding to the first depth map; MinDepthRT is the same quantity in the initial rendering result corresponding to the second depth map; d is the depth value of the pixel point in the target image; dd is the depth value of the target pixel point, corresponding to the image area to which the pixel point belongs, in the first depth map; dx is the same depth value in the second depth map; lerp is the linear interpolation function; MidDepthRes is the interpolated image; MidDepthRT is the color value of the color channel of the pixel point in the interpolated image MidDepthRes; Am is the transparency value of the transparency channel of the pixel point in the interpolated image MidDepthRes; and MainRT is the pixel value of the color channel of the pixel point in the target image.
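Putting the three formulas together, the sketch below selects Res1, Res2 or Res3 per pixel point according to its rendering region. It is a minimal NumPy illustration, not the patent's implementation: it assumes the initial rendering results have already been upsampled to the target-image resolution (the patent instead samples the target pixel point of each image area), and the clamp and epsilon guard on the interpolation factor are added safety measures, not patent text.

```python
import numpy as np

NON_EDGE, TRANSITION, EDGE = 0, 1, 2  # illustrative region labels

def composite(region, main_rt, max_rt, max_a, min_rt, min_a, d, dd, dx):
    """Compute Res1/Res2/Res3 per pixel point and pick one by region.

    main_rt (H,W,3): colour of the target image; max_rt/min_rt (H,W,3)
    and max_a/min_a (H,W): colour and transparency of the initial
    rendering results for the first and second depth maps; d, dd, dx
    (H,W): depths of the target image, first depth map and second
    depth map."""
    ad, ax = max_a[..., None], min_a[..., None]
    res1 = max_rt * (1 - ad) + main_rt * ad                 # non-edge
    res2 = min_rt * (1 - ax) + main_rt * ax                 # transition

    # MidDepthRes = lerp(MaxDepthRT, MinDepthRT, (d - dx) / (dd - dx))
    t = np.clip((d - dx) / np.maximum(dd - dx, 1e-6), 0.0, 1.0)[..., None]
    mid_rt = max_rt + (min_rt - max_rt) * t                 # lerp colour
    am = ad + (ax - ad) * t                                 # lerp alpha
    res3 = mid_rt * (1 - am) + main_rt * am                 # edge

    r = region[..., None]
    return np.where(r == EDGE, res3,
                    np.where(r == TRANSITION, res2, res1))
```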
In a second aspect, an embodiment of the present invention provides an in-game rendering apparatus, the apparatus including: an acquisition module for acquiring a plurality of depth maps of a target image, wherein the target image comprises at least one image area, each depth map comprises depth information corresponding to the image area, and for the same image area the depth information differs from one depth map to another; an initial-rendering-result determining module for drawing a preset rendering object based on the depth maps to obtain an initial rendering result of the rendering object corresponding to each depth map; a rendering-area determining module for dividing the target image into a plurality of rendering areas, comprising an edge area, a non-edge area and a transition area between them, based on the plurality of depth maps and the initial rendering result corresponding to each depth map; and a rendering module for rendering each rendering area in its preset rendering mode, based on the initial rendering result of the rendering object corresponding to each depth map, to obtain a rendered target image.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the rendering method in the game in any one of the first aspect.
In a fourth aspect, embodiments of the invention provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the in-game rendering method of any of the first aspects.
The embodiment of the invention has the following beneficial effects:
the invention provides an in-game rendering method and device and an electronic device: a plurality of depth maps of a target image are acquired; a preset rendering object is drawn based on the depth maps to obtain an initial rendering result of the rendering object corresponding to each depth map; the target image is divided into an edge area, a non-edge area and a transition area between them, based on the plurality of depth maps and the initial rendering result corresponding to each depth map; and each rendering area is rendered in its preset rendering mode, based on those initial rendering results, to obtain the rendered target image. Because the target image is divided into edge, non-edge and transition areas by means of the multiple depth maps and the initial rendering result corresponding to each depth map, and different rendering modes are used for different areas, mosaic artifacts and incorrect occlusion relationships in the edge and transition areas are avoided, the rendering effect and rendering efficiency are improved, and the player's game experience is improved accordingly.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a rendering method in a game according to an embodiment of the present invention;
FIG. 2 is a flow chart of another rendering method in a game according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a rendering apparatus in a game according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, adding special effects to target objects in games is a very common form of rendering. As game quality improves, and in pursuit of better visual results, the number of special-effect layers and the screen area each layer occupies during rendering keep increasing, so the fill rate consumed by special effects rises and the efficiency of rendering them falls. Owing to the characteristics of special-effect rendering, particularly of the particles that are common in effects, rendering the effect at reduced resolution has, in most cases, little impact on the player's game experience.
The current methods for solving the above problems mainly include the following two types:
firstly, down-sampling a target image, generating an 1/4-resolution maximum depth map, rendering a special effect at a resolution of 1/4, and recording expectation and variance of the position of the special effect on a rendering target during rendering; and performing interpolation mixing on the special effect and the target image according to normal distribution according to the position and variance information recorded on the rendering target corresponding to the maximum depth map with the 1/4 resolution. In this way, when a special effect is drawn on a rendering target, it is necessary to assume that the special effect conforms to a normal distribution. The assumption has better effect when the variance of the special effect is smaller or the special effect and other objects of the same pixel point on the target image are not interpenetrated. However, when the distribution variance of the special effect is large and the special effect is inserted into other objects of the same pixel point on the target image, the assumed error is large, so that when the special effect is mixed into the target image, the rendering result is incorrect.
Second, the scene is rendered conventionally while the depth values of the target image are written to a buffer to obtain a depth buffer, which is down-sampled to a low-resolution rendering target; the particles are rendered into an off-screen rendering object and depth-tested against the depth buffer; the rendering target is then up-sampled and the rendering object is added into it to obtain the rendered image. Because the up-sampling causes mosaic artifacts, regions with discontinuous depth values are extracted by edge detection and the effect is rendered again at full resolution in those regions. Since this approach obtains the correct effect in depth-discontinuous edge regions by drawing the effect a second time, it increases the number of draw calls issued for rendering, which hurts rendering efficiency and performance, so the efficiency gain is limited.
In view of the above problems, the rendering method and device in a game and the electronic device provided by the embodiments of the present invention can be applied to games in which scenes are rendered with special effects.
To facilitate understanding, the rendering method in a game disclosed by an embodiment of the present invention is first described in detail; as shown in FIG. 1, the method includes the following steps:
step S102, acquiring a plurality of depth maps of a target image; wherein the target image comprises at least one image area; the depth map comprises depth information corresponding to the image area;
the target image is generally an image of a specified object in a game scene or an image of a specified area, and the target image generally includes a game character controlled by a player, a game character not controlled by the player, props in the game scene, buildings, environments, trees, roads and other elements. The sizes of the image regions included in the target image are the same, and each image region may include a plurality of pixel points in the target image, such as 4, 9, 16, and the like; the number of specific pixel points can be set according to the size of the target image and actual needs, and each image area is not overlapped with each other.
The depth information corresponding to the image area may be a maximum depth value, a minimum depth value, or an average depth value of all depth values in the image area. For example, if the plurality of depth maps include three depth maps, the depth information corresponding to the image area in the first depth map may be the maximum depth value, the depth information corresponding to the image area in the second depth map may be the minimum depth value, and the depth information corresponding to the image area in the third depth map may be the average depth value for the same image area.
For example, suppose a target image at 1920 x 1080 resolution is to be turned into two depth maps at 1/4 of its width and height, i.e. at 480 x 270 resolution, so that each image region of the target image contains 16 pixel points. For the first depth map, the depth value of each pixel point is the maximum depth value within the corresponding image area (i.e. among those 16 pixel points); for the second depth map, it is the minimum depth value within the corresponding image area. In general, the step of acquiring multiple depth maps of the target image can be understood as down-sampling the depth image of the target image: for each sampling region (i.e. image region), the maximum, minimum or average depth value of the region is taken, yielding the multiple depth maps.
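As a concrete illustration of this down-sampling step, the following sketch builds the two depth maps from a depth buffer held in a NumPy array. The function name and the tile-based layout are illustrative assumptions rather than patent text.

```python
import numpy as np

def build_depth_maps(depth: np.ndarray, block: int = 4):
    """Down-sample a full-resolution depth buffer into a maximum-depth map
    (first depth map) and a minimum-depth map (second depth map).

    Each block x block tile of the source is one image area; a 1920x1080
    buffer with block=4 yields two 480x270 depth maps (16 pixel points
    per image area)."""
    h, w = depth.shape
    tiles = depth[:h - h % block, :w - w % block].reshape(
        h // block, block, w // block, block)
    return tiles.max(axis=(1, 3)), tiles.min(axis=(1, 3))

# e.g. max_depth, min_depth = build_depth_maps(np.random.rand(1080, 1920))
```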
Step S104, drawing preset rendering objects based on the depth maps to obtain an initial rendering result of the rendering object corresponding to each depth map;
the preset rendering object generally refers to a special effect that the target image needs to be drawn and rendered, and the preset rendering object generally refers to a game special effect, and generally refers to a preset special effect image. Usually, the special effects which are not usually appeared in reality are pre-manufactured by computer software. The size of the rendering object may be the same as that of the depth map, and the color value, the transparency value, and the depth value of each pixel point are stored in the rendering object. The initial rendering result generally includes a target color value and a target transparency value of each pixel. The transparency value may also be referred to as an alpha value.
In a specific embodiment, to improve drawing efficiency, the preset rendering object may be drawn for multiple depth maps simultaneously within the same drawing instruction, yielding the rendering result corresponding to each depth map. The rendering object can be drawn against the different depth maps by the graphics card and by a pixel shader respectively. Specifically, the graphics card can perform a depth test between the depth value of each pixel point in the depth map and the depth value of each pixel point in the rendering object, clipping against the depth map to obtain the initial rendering result of the rendering object corresponding to that depth map; at the same time, a pixel shader can perform the same per-pixel depth test against another depth map, obtaining the initial rendering result for the pixel points that pass the test. The initial rendering result of the rendering object corresponding to each depth map is also typically written to a rendering buffer.
In general, the step of obtaining the initial rendering result of the rendering object corresponding to each depth map can be understood as off-screen rendering: the initial rendering result is applied to the target image before the final rendered target image is displayed.
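A minimal sketch of the off-screen render targets this step implies is given below. The RGBA layout and the initial alpha of 1.0 are assumptions, chosen to be consistent with the multiplicative blending rule given later; they are not fixed by the patent.

```python
import numpy as np

def make_render_target(height: int, width: int) -> np.ndarray:
    """Allocate one off-screen RGBA render target per depth map.

    RGB accumulates the effect colour; A holds the accumulated
    transparency. A starts at 1.0 (nothing drawn yet), so that each
    layer drawn with Ad = (1 - srcAlpha) * destAlpha scales the
    remaining transparency down."""
    rt = np.zeros((height, width, 4), dtype=np.float32)
    rt[..., 3] = 1.0
    return rt

# one render target per depth map, at depth-map resolution (e.g. 480x270)
max_depth_rt = make_render_target(270, 480)  # paired with the first depth map
min_depth_rt = make_render_target(270, 480)  # paired with the second depth map
```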
Step S106, dividing the target image into a plurality of rendering areas based on the plurality of depth maps and the initial rendering result corresponding to each depth map; the rendering area comprises an edge area, a non-edge area and a transition area between the edge area and the non-edge area;
because there is a region with discontinuous depth values between the object in the target image and the rendering object, or a region with a penetration relationship between a special effect and a non-special effect, if the region is rendered in a traditional way, the region may generate problems of mosaic, incorrect occlusion relationship, and the like, and therefore, the target image may be divided into an edge region, a non-edge region, and a transition region between the edge region and the non-edge region according to the obtained multiple depth maps and the initial rendering result corresponding to each depth map. Wherein divide the transition region, then render to this transition region alone, can further improve the rendering effect, make the rendering effect more lifelike, avoid appearing the flaw.
For each pixel point in the depth maps, the rendering region to which it belongs can be determined from its depth value in each depth map and its color value in the initial rendering result corresponding to each depth map. In actual implementation, for each pixel point one can compute the absolute difference of its depth values across the depth maps and the absolute difference of its color values across the corresponding initial rendering results, and determine the rendering region from the product of these absolute differences; the product generally indicates the size of the error in the pixel values of the channels of the pixel point. The larger the product, the closer the pixel point is to an edge region; the smaller the product, the closer it is to a non-edge region. The transition region can be determined from the pixel point's depth values in the depth maps together with the size of the product: typically, when the depth value of a pixel point is larger than those of its adjacent pixel points and an adjacent pixel point has a large error, the pixel point lies in the transition region.
The image areas are divided according to the positions of the pixel points of the target image; specifically, the pixel points of a given region are grouped into one image area. The target image is divided into at least one image area: all pixel points of the target image may form a single image area, or, for example, 16 pixel points may be divided into 4 image areas of 4 mutually adjacent pixel points each. Each pixel point carries several kinds of pixel information, such as depth information and color information, whereas each pixel point of a depth map carries only the depth information of an image region, specifically the depth information of one target pixel point within that region. The target image may consist of multiple rendering regions, each containing at least one pixel point; the shapes of the rendering regions are varied and unpredictable, being produced by the division of step S106, but the pixel information of each pixel point within a rendering area is the same as that of the corresponding pixel point in the target image.
Step S108, rendering each rendering area in the rendering mode preset for it, based on the initial rendering result of the rendering object corresponding to each depth map, to obtain a rendered target image.
The final rendering results of the pixel points in the edge area, the non-edge area and the transition area are calculated in different rendering modes, and the final rendering result of each pixel point is then blended into the target image to obtain the rendered target image. There are several rendering modes: a first rendering mode corresponding to the edge region, a second rendering mode corresponding to the non-edge region, and a third rendering mode corresponding to the transition region. The first rendering mode mainly blends the depth values of the multiple depth maps, the initial rendering result corresponding to each depth map, and the depth and color values of the target image in preset proportions to obtain the final rendering result of each pixel point in the edge area. The second rendering mode mainly blends the depth value of the maximum depth map, its corresponding initial rendering result and the pixel values of the target image to obtain the final rendering result of each pixel point in the non-edge region. The third rendering mode mainly blends the depth value of the minimum depth map, its corresponding initial rendering result and the pixel values of the target image to obtain the final rendering result of each pixel point in the transition region.
Specifically, for the non-edge region and the transition region, the final rendering result of each pixel point in the non-edge region can be directly calculated according to the initial rendering result corresponding to the depth map and the color value of the target image. When the final rendering result of each pixel point is calculated in the non-edge area and the transition area, the initial rendering results corresponding to different depth maps are adopted; the depth map used for the non-edge regions typically has larger depth values than the depth map used for the transition regions. For the edge region, the final rendering result of each pixel point in the edge region may be calculated according to the depth values of the multiple depth maps, the initial rendering result corresponding to each depth map, and the depth value and the color value of the target image.
The rendering method in the game described above acquires a plurality of depth maps of the target image; draws a preset rendering object based on the depth maps to obtain an initial rendering result of the rendering object corresponding to each depth map; divides the target image into an edge area, a non-edge area and a transition area between them, based on the plurality of depth maps and the initial rendering result corresponding to each depth map; and renders each rendering area in its preset rendering mode, based on those initial rendering results, to obtain the rendered target image. In this method the depth information differs between the depth maps; the target image is divided into edge, non-edge and transition areas through the depth maps and the initial rendering result corresponding to each, and different rendering modes are used for different areas, so mosaic artifacts and incorrect occlusion relationships in the edge and transition areas are avoided, the rendering effect and rendering efficiency are improved, and the player's game experience is improved accordingly.
The step of acquiring multiple depth maps of the target image is described below: the target image is down-sampled according to the depth information corresponding to each image area, yielding multiple depth maps; for the same image area, the depth information corresponding to the area differs between depth maps.
Specifically, a depth map may be obtained by extracting, from each image region of the target image, the pixel point with the maximum depth value (i.e. the depth information) and assembling the extracted pixel points into a map, so that every pixel point of that depth map is the maximum-depth pixel point of its image region; likewise, a depth map may be built from the pixel point with the minimum depth value of each image region; and, of course, a depth map may also be built from the pixel point with the median depth value of each image region.
In order to improve the rendering effect of the target image, divide the rendering areas more accurately and draw the rendering result, the depth information of the target image may be used to distinguish the edge areas of the target image and the areas where effects interpenetrate non-effects. The plurality of depth maps therefore include a first depth map and a second depth map: for the same image area, the depth information corresponding to the area in the first depth map is the maximum depth value in the area, and that in the second depth map is the minimum depth value. The first depth map may be called the maximum depth map, and the second depth map the minimum depth map.
In actual implementation, the depth image of the target image is subjected to down-sampling, the maximum depth value corresponding to each image area is obtained for each image area, and the maximum depth value is determined as the depth value of a pixel point corresponding to the image area in the first depth map; similarly, for each image area, the minimum depth value corresponding to the image area is obtained, and the minimum depth value is determined as the depth value of the pixel point corresponding to the image area in the second depth map.
In the above manner, by acquiring the first depth map and the second depth map, richer depth information in the target image can be obtained, and then based on the depth values in the first depth map and the second depth map, richer and more detailed initial rendering results and a more accurate rendering area can be acquired.
In a possible embodiment, in order to improve the rendering effect and rendering efficiency of the target image, the following describes a process of determining an initial rendering result, which specifically includes:
(1) For the plurality of depth maps, generating a rendering target corresponding to each depth map;
the rendering target is usually a buffer area, and is mainly used for recording an initial rendering result output after rendering, and the special effect cannot be directly drawn to a target image and displayed on a screen; specifically, the rendering target corresponding to the first depth map may be referred to as a first rendering target, and the rendering target corresponding to the second depth map may be referred to as a second rendering target. In addition, the rendering target stores the transparency value of each pixel point.
(2) In response to a drawing instruction for the rendering object, acquiring the depth value, color value and transparency value of each pixel point in the rendering object;
the rendering object is preset, and after a drawing instruction for the rendering object is called, the depth value, the color value and the transparency value of each pixel point in the rendering object can be acquired firstly. In actual implementation, after a drawing instruction for the rendering object is called, for each pixel point in each depth map, the depth value, the color value and the transparency value of the pixel point in the rendering object are obtained.
(3) For each pixel point in each depth map, if the depth value of the pixel point in the rendering object is smaller than its depth value in the depth map, determining the color value of the pixel point in the rendering object together with the target transparency value of the pixel point as the initial rendering result of the pixel point corresponding to that depth map, the target transparency value being determined based on a preset blending mode of the rendering object;
if the depth value of the pixel point in the rendering object is smaller than the depth value of the pixel point in the depth map, the rendering object is not shielded by the object in the depth map and is finally displayed in the rendered target image; on the contrary, if the depth value of the pixel point in the rendering object is greater than the depth value of the pixel point in the depth map, it indicates that the rendering object is shielded by the object in the depth map and will not be displayed in the rendered target image. Therefore, only when the depth value of the pixel point in the rendering object is smaller than the depth value of the pixel point in the depth map, the color value of the pixel point in the rendering object and the target transparency value of the pixel point are determined as the initial rendering result of the pixel point.
As the plurality of depth maps include the first depth map and the second depth map, it can be understood that, for each pixel point in the first depth map, if the depth value of the pixel point in the rendered object is smaller than the depth value of the pixel point in the first depth map, the color value of the pixel point in the rendered object and the target transparency value of the pixel point are determined as the initial rendering result of the pixel point corresponding to the first depth map; wherein the target transparency value is determined based on a blending mode preset by the rendering object. Meanwhile, aiming at each pixel point in the second depth map, if the depth value of the pixel point in the rendering object is smaller than that of the pixel point in the second depth map, determining the color value of the pixel point in the rendering object and the target transparency value of the pixel point as an initial rendering result of the pixel point corresponding to the second depth map; wherein the target transparency value is determined based on a blending mode preset by the rendering object.
The rendering target stores the transparency value of each pixel point. The preset blending modes of the rendering object generally include a multiplication blending mode and an addition blending mode, and of course other blending modes exist; the determination of the target transparency value in the multiplication and addition blending modes is described below.
If the preset blending mode is the multiplication blending mode, the product of the transparency value of the pixel point in the rendering object and the transparency value of the pixel point stored in the rendering target is calculated; that product is subtracted from the transparency value stored in the rendering target, and the difference is determined as the target transparency value. If the preset blending mode is the addition blending mode, the transparency value of the pixel point stored in the rendering target is determined as the target transparency value.
Specifically, if the preset blending mode is the multiplication blending mode, the target transparency value of the pixel point in the first depth map may be calculated by the following formula:
Ad=(1-srcAlpha)*destAlpha;
The target transparency value of the pixel point in the second depth map can likewise be calculated by:
Ax=(1-srcAlpha)*destAlpha;
where Ad is the target transparency value of the pixel point in the initial rendering result corresponding to the first depth map; Ax is the target transparency value of the pixel point in the initial rendering result corresponding to the second depth map; srcAlpha is the transparency value of the pixel point stored in the preset rendering object; and destAlpha is the transparency value of the pixel point stored in the rendering target. In this way a more accurate transparency value is obtained, further improving the rendering effect.
(4) Determining the initial rendering result of each pixel point in the depth map as the pixel value of the corresponding pixel point in the rendering target corresponding to that depth map.
Since the initial rendering result is not drawn directly onto the target image, and is needed for dividing the target image into regions, the initial rendering result of each pixel point in the depth map is determined as the pixel value of the corresponding pixel point in the rendering target corresponding to that depth map; the rendering targets thus mainly serve to store the initial rendering results corresponding to the first and second depth maps.
In a specific implementation, the rendering object is drawn by the graphics card against the first depth map to obtain the initial rendering result corresponding to it: the first depth map may be set as the current depth buffer of DirectX (an application program interface, DX for short), so that clipping is performed directly against the first depth map. At the bottom layer, setting the first depth map as the current DX depth buffer makes DX direct the graphics hardware, through the graphics driver, to perform depth clipping. The rendering object is drawn against the second depth map by a pixel shader to obtain the initial rendering result corresponding to the second depth map. Whether the initial rendering result is obtained through the graphics card or the pixel shader, the drawing principle is the same: for each pixel point in each depth map, if the depth value of the pixel point in the rendering object is smaller than its depth value in the depth map, the color value of the pixel point in the rendering object is determined as the initial rendering result, and at the same time the target transparency value of the pixel point is determined according to the blending mode of the rendering object and likewise recorded in the initial rendering result.
In the above manner, the preset rendering object is drawn based on each depth map, the obtained initial rendering result corresponding to each depth map not only includes the color value but also includes the transparency value, and the pixel value of the initial rendering result is enriched.
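The per-pixel logic of this drawing step can be sketched as follows. This is an illustrative NumPy model, not the graphics-card or shader implementation: the colour accumulation rule is an assumption (the patent fixes only the alpha update), and all array names are hypothetical.

```python
import numpy as np

def draw_effect_layer(rt, depth_map, fx_depth, fx_color, fx_alpha,
                      mode="multiply"):
    """Draw one layer of the rendering object into an off-screen render
    target rt (H x W x 4), depth-testing it against one depth map.

    Only pixels where the effect is nearer than the depth map pass the
    test; for those pixels the effect colour is accumulated and the
    stored transparency is updated according to the blending mode."""
    visible = fx_depth < depth_map            # per-pixel depth test
    src_a = np.where(visible, fx_alpha, 0.0)  # failed pixels contribute nothing

    if mode == "multiply":
        # destAlpha - srcAlpha*destAlpha == (1 - srcAlpha) * destAlpha
        rt[..., 3] *= (1.0 - src_a)
    # in the addition blending mode the stored transparency is unchanged

    # accumulate the effect colour (a simple source-over rule; the patent
    # fixes only the alpha update, so this colour rule is an assumption)
    rt[..., :3] += fx_color * src_a[..., None]
    return rt

# the same draw call is issued against both depth maps:
# draw_effect_layer(max_depth_rt, max_depth, fx_depth, fx_color, fx_alpha)
# draw_effect_layer(min_depth_rt, min_depth, fx_depth, fx_color, fx_alpha)
```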
In a specific embodiment, in order to improve the rendering effect of the target image and avoid mosaic artifacts and incorrect occlusion relationships in its edge regions, the process of dividing the target image into a plurality of rendering regions is described in detail below, as shown in FIG. 2.
Step S202, calculating the error weight of each pixel point in the depth map based on a plurality of depth maps and the initial rendering result corresponding to each depth map; the error weight is used for indicating the error magnitude of the pixel value of each channel of the pixel point;
for example, if the error weight is large, it can be stated that the final rendering result of the pixel point cannot be determined by simply using the first depth map or the second depth map and the initial rendering result corresponding thereto, but needs to determine the final rendering result of the pixel point by using a mixed interpolation method.
Computing the error weight requires the depth values of the first and second depth maps and the color values of the initial rendering results corresponding to them. Specifically, the error weight of each pixel point may be calculated from the difference of the depth values of the pixel point between the first and second depth maps, the difference of its color values between the corresponding initial rendering results, and the sum of the absolute partial derivatives of its depth value in the first depth map.
In one specific embodiment, for each pixel point in the depth maps, the error weight of the pixel point is set as:
A = abs((zmax - zmin) * (maxDepthcolor - minDepthcolor) * (dzmaxX + dzmaxY));
where A is the error weight of the pixel point; abs takes the absolute value of the whole product; zmax is the linear depth value of the pixel point in the first depth map; zmin is the linear depth value of the pixel point in the second depth map; maxDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the first depth map; minDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the second depth map; dzmaxX is the absolute value of the partial derivative, in the x direction, of the linear depth value of the pixel point in the first depth map; and dzmaxY is the absolute value of the partial derivative of the same quantity in the y direction.
The linear depth value zmax of the pixel point in the first depth map and the linear depth value zmin of the pixel point in the second depth map may be calculated as follows:
zmax = (2.0 * near1 * far1) / (far1 + near1 - (depth1 * 2.0 - 1.0) * (far1 - near1));
zmin = (2.0 * near2 * far2) / (far2 + near2 - (depth2 * 2.0 - 1.0) * (far2 - near2));
where depth1 is the original depth value of the pixel point in the first depth map; depth2 is the original depth value of the pixel point in the second depth map; near1 and near2 are the distances from the camera of the nearest visible pixel in the first and second depth maps respectively; and far1 and far2 are the distances from the camera of the farthest visible pixel in the first and second depth maps respectively.
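A sketch of the two computations above, under the assumption that the colour difference is reduced to a single channel (the patent does not fix the colour encoding); np.gradient stands in for the ddx/ddy derivatives a pixel shader would supply.

```python
import numpy as np

def linearize_depth(depth, near, far):
    """Invert the projective depth encoding into linear view-space depth,
    following the reconstruction given above."""
    return (2.0 * near * far) / (
        far + near - (depth * 2.0 - 1.0) * (far - near))

def error_weight(zmax, zmin, max_depth_color, min_depth_color):
    """Per-pixel error weight
    A = abs((zmax-zmin) * (maxDepthcolor-minDepthcolor) * (dzmaxX+dzmaxY)).

    The colour inputs are single-channel arrays here, an assumption made
    for illustration."""
    dz_y, dz_x = np.gradient(zmax)  # partial derivatives in y and x
    return np.abs((zmax - zmin)
                  * (max_depth_color - min_depth_color)
                  * (np.abs(dz_x) + np.abs(dz_y)))

# zmax = linearize_depth(depth1, near1, far1)   # first depth map
# zmin = linearize_depth(depth2, near2, far2)   # second depth map
```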
Step S204, for each pixel point in the depth maps, calculating the average of the depth values of the pixel point across the depth maps;
specifically, for each pixel point in the depth map, an average value of the depth value of the pixel point in the first depth map and the depth value of the pixel point in the second depth map is calculated, that is, the average depth value is calculated.
Step S206, dividing the target image into a plurality of rendering areas according to the error weight and the average depth value of each pixel point in the depth map.
Specifically, the target image may be divided into multiple rendering areas according to the size of the error weight. For example, if the error weight of a pixel point in the depth map is greater than a preset threshold, the image area corresponding to that pixel point may be determined as an edge area; if it is smaller than the preset threshold, the corresponding image area of the target image may be determined as a non-edge area. In addition, to further improve the rendering effect, a transition region is separated from the non-edge region: if the error weight of a pixel point is smaller than the preset threshold, but, on the premise that the average depth value of the pixel point satisfies a preset condition, the error weight of an adjacent pixel point is greater than the threshold, the corresponding image area of the target image may be determined as a transition area.
In the above manner, the error weight and the average depth value are calculated through the multiple depth maps and the initial rendering result corresponding to each depth map, and the target image can be divided into multiple rendering areas through the size of the error weight and the average depth value, so that the rendering effect and the rendering efficiency of the target image are improved, and the problems of mosaic occurrence and incorrect occlusion relation in the edge area of the target image are solved.
In a specific embodiment, the step of dividing the target image into a plurality of rendering regions according to the error weight and the average depth value of each pixel point in the depth map includes:
(1) For each pixel point in the depth maps, determining the error weight of the pixel point as the first pixel value of the pixel point; determining, among its adjacent pixel points, the designated pixel points whose average depth value is smaller than that of the pixel point, and determining the maximum of the error weights of the pixel point and the designated pixel points as the second pixel value of the pixel point; the adjacent pixel points comprising the current pixel point and the pixel points adjacent to it in the horizontal and vertical directions;
in actual implementation, a first rendering target can be generated in advance, the size of the first rendering target is the same as that of the depth map, and the error weight of each pixel point is determined as the pixel value of a first channel of the pixel point in the first rendering target; determining the average depth value of each pixel point as the pixel value of a second channel of the pixel point in the first rendering target; wherein the first channel may be an R channel and a G channel.
A second rendering target, the same size as the first, is then generated in advance, and the error weight of each pixel point is determined as the pixel value of the first channel of the pixel point in the second rendering target, i.e. the first pixel value of the pixel point. Meanwhile, if the adjacent pixel points of the pixel point in the first rendering target include a designated pixel point whose average depth value is smaller than that of the pixel point, the maximum among the first-channel error weights of the pixel point and the designated pixel points is determined as the pixel value of the second channel of the pixel point in the second rendering target, i.e. the second pixel value. The adjacent pixel points are those whose distance from the current pixel point, measured as the sum of the absolute differences of the x and y coordinates, is 1, i.e. the four pixel points adjacent to the current pixel point; adjacent pixel points therefore usually comprise four pixel points.
It should be noted that, if there is no designated pixel point whose average depth value is smaller than that of the pixel point among its adjacent pixel points, the error weight of the pixel point itself is determined as its second pixel value.
(2) And dividing the target image into a plurality of rendering areas according to the first pixel value and the second pixel value of each pixel point in the depth map.
The first pixel value represents the original error weight of the pixel point, while the second pixel value performs a one-step region expansion towards adjacent pixel points with smaller depth values: only when the preset condition is satisfied does the second pixel value represent the maximum error weight between the pixel point and the designated pixel point; when the preset condition is not satisfied, the second pixel value is simply the original error weight of the pixel point. The preset condition is that the adjacent pixel points of the pixel point in the first rendering target include a designated pixel point whose average depth value is smaller than that of the pixel point.
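For illustration only (this sketch is not part of the patent text), the first and second pixel value computation described above can be written as follows in Python, assuming hypothetical (H, W) numpy arrays error_weight and avg_depth holding the per-pixel error weights and average depth values:

```python
# Illustrative sketch (not from the patent): computing the first and second
# pixel values from hypothetical per-pixel arrays of error weights and
# average depth values.
import numpy as np

def first_second_pixel_values(error_weight: np.ndarray, avg_depth: np.ndarray):
    h, w = error_weight.shape
    first = error_weight.copy()    # first pixel value: the original error weight
    second = error_weight.copy()   # second pixel value: defaults to the original weight
    neighbours = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # horizontal/vertical 4-neighbourhood
    for y in range(h):
        for x in range(w):
            for dy, dx in neighbours:
                ny, nx = y + dy, x + dx
                # a designated pixel point: an adjacent pixel with a smaller average depth
                if 0 <= ny < h and 0 <= nx < w and avg_depth[ny, nx] < avg_depth[y, x]:
                    # take the maximum error weight of the pixel and the designated pixel
                    second[y, x] = max(second[y, x], error_weight[ny, nx])
    return first, second
```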
Specifically, the target image may be divided into a plurality of rendering areas according to the magnitudes of the first pixel value and the second pixel value of each pixel point in the depth map: when the first pixel value and the second pixel value are both small, the image area of the target image corresponding to the pixel point may be determined as a non-edge area; when the first pixel value is large, the image area may be determined as an edge area; and when the first pixel value is small but the second pixel value is large, the image area may be determined as a transition area.
In the above manner, the error weight of each pixel point is taken as its first pixel value; the region is then expanded from each pixel point towards its adjacent pixel points with smaller depth values, and the maximum error weight between the pixel point and the designated pixel point is taken as its second pixel value; finally, the target image is divided into regions according to the first and second pixel values. This divides the target image more accurately into edge, non-edge and transition areas, further improving the rendering effect and rendering efficiency.
In a specific embodiment, the step of dividing the target image into a plurality of rendering regions according to the first pixel value and the second pixel value of each pixel point in the depth map includes performing the following operations for each image region in the target image: if the first pixel value and the second pixel value of the target pixel point corresponding to the image area in the depth map are both smaller than a preset threshold, the image area is divided into the non-edge area; if the first pixel value of that target pixel point is smaller than the preset threshold but its second pixel value is larger than the preset threshold, the image area is divided into the transition area; and if the first pixel value of that target pixel point is larger than the preset threshold, the image area is divided into the edge area.
Each image area in the target image corresponds to one pixel point in the depth map. The preset threshold can be set according to the actual situation and the actual application scenario.
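Continuing the illustrative sketch above, the threshold tests of this embodiment might be written as follows; the integer region labels and the treatment of values exactly equal to the preset threshold are choices of the sketch, not specified by the patent:

```python
# Illustrative sketch: classifying each depth-map pixel into the three
# rendering areas by the preset threshold, using the first/second pixel
# values from the previous sketch.
import numpy as np

NON_EDGE, TRANSITION, EDGE = 0, 1, 2

def classify_regions(first: np.ndarray, second: np.ndarray, threshold: float) -> np.ndarray:
    regions = np.full(first.shape, NON_EDGE, dtype=np.uint8)   # both values small
    regions[(first < threshold) & (second > threshold)] = TRANSITION
    regions[first > threshold] = EDGE                          # first value large
    return regions
```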
In a specific embodiment, in order to further improve the rendering effect of the target image, a different rendering mode is adopted for each rendering area, and each rendering area is rendered to obtain the rendered target image, specifically as follows:
determining a final rendering result of each pixel point in the target image by adopting the rendering mode preset for each rendering area, based on the initial rendering result corresponding to each depth map; and mixing the final rendering result into the corresponding pixel points in the target image to obtain the rendered target image.
Specifically, based on the first depth map, the initial rendering result corresponding to the first depth map and the target image, the final rendering result of each pixel point in the non-edge area of the target image is determined using the rendering mode preset for the non-edge area; based on the second depth map, the initial rendering result corresponding to the second depth map and the target image, the final rendering result of each pixel point in the transition area is determined using the rendering mode preset for the transition area; and based on the first and second depth maps, their respective initial rendering results and the target image, the final rendering result of each pixel point in the edge area is determined using the rendering mode preset for the edge area. This yields the final rendering result of every pixel point in the target image.
In a specific embodiment, the step of determining a final rendering result of each pixel point in the target image by using a preset rendering mode of each rendering area based on an initial rendering result corresponding to each depth map includes:
aiming at each pixel point of a non-edge area in a target image, setting the final rendering result of the pixel point as:
Res1=MaxDepthRT*(1-Ad)+MainRT*Ad;
Res1 is the rendering result of the pixel point; MaxDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the first depth map; Ad is the pixel value of the target pixel point transparency channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the first depth map; MainRT is the pixel value of the color channel of the pixel point in the target image.
Aiming at each pixel point in the transition region in the target image, setting the final rendering result of the pixel point as:
Res2=MinDepthRT*(1-Ax)+MainRT*Ax;
Res2 is the rendering result of the pixel point; MinDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; Ax is the pixel value of the target pixel point transparency channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; MainRT is the pixel value of the color channel of the pixel point in the target image.
Aiming at each pixel point of the edge area in the target image, setting the final rendering result of the pixel point as:
Res3=MidDepthRT*(1-Am)+MainRT*Am;
MidDepthRes=lerp(MaxDepthRT,MinDepthRT,(d-dx)/(dd-dx));
Res3 is the rendering result of the pixel point; MaxDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the first depth map; MinDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; d is the depth value of the pixel point in the target image; dd is the depth value of the target pixel point corresponding to the image area to which the pixel point belongs in the first depth map; dx is the depth value of the target pixel point corresponding to the image area to which the pixel point belongs in the second depth map; lerp is a linear interpolation function; MidDepthRes is the interpolated image; MidDepthRT is the color value of the color channel of the pixel point in the interpolated image MidDepthRes; Am is the transparency value of the transparency channel of the pixel point in the interpolated image MidDepthRes; MainRT is the pixel value of the color channel of the pixel point in the target image.
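As a non-normative illustration of the three blend rules, the following Python sketch evaluates Res1, Res2 and Res3 for a single pixel; the RGBA tuple representation and the guard against dd equal to dx are assumptions of the sketch (the patent does not state how that degenerate case is handled):

```python
# Illustrative per-pixel sketch of the three blend rules; colours are RGBA
# tuples with channel values in [0, 1].
def lerp(a, b, t):
    return a + (b - a) * t

def blend_non_edge(max_rt_rgb, ad, main_rgb):
    # Res1 = MaxDepthRT * (1 - Ad) + MainRT * Ad
    return tuple(c * (1 - ad) + m * ad for c, m in zip(max_rt_rgb, main_rgb))

def blend_transition(min_rt_rgb, ax, main_rgb):
    # Res2 = MinDepthRT * (1 - Ax) + MainRT * Ax
    return tuple(c * (1 - ax) + m * ax for c, m in zip(min_rt_rgb, main_rgb))

def blend_edge(max_rt_rgba, min_rt_rgba, main_rgb, d, dd, dx):
    # MidDepthRes = lerp(MaxDepthRT, MinDepthRT, (d - dx) / (dd - dx))
    t = (d - dx) / (dd - dx) if dd != dx else 0.0
    mid = [lerp(a, b, t) for a, b in zip(max_rt_rgba, min_rt_rgba)]
    mid_rgb, am = mid[:3], mid[3]
    # Res3 = MidDepthRT * (1 - Am) + MainRT * Am
    return tuple(c * (1 - am) + m * am for c, m in zip(mid_rgb, main_rgb))
```

The interpolation factor (d-dx)/(dd-dx) re-normalizes the depth of the pixel between the minimum-depth and maximum-depth samples, so an edge pixel is blended between the two initial rendering results in proportion to where its depth lies between the two depth maps.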
In the above manner, different rendering modes are used for different rendering areas when rendering the target image, which keeps rendering efficient while reducing the fill rate during effect drawing, yields a rendering result with fewer defects, and avoids mosaics and incorrect occlusion relations in areas with discontinuous depth information.
In addition, when the rendering method was applied on the client, tests showed the special-effect rendering efficiency improved by a factor of 4.5. The method not only handles the occlusion relation between the special effect and the objects in the target image essentially correctly, preventing mosaics, incorrect blending and similar artifacts and improving the rendering effect, but also draws all special effects in a single draw pass, improving the rendering efficiency.
Corresponding to the above method embodiment, an embodiment of the present invention provides a rendering apparatus in a game, as shown in fig. 3, the apparatus including:
the acquiring module 31 is configured to acquire multiple depth maps of the target image; wherein the target image comprises at least one image area; the depth map comprises depth information corresponding to the image area;
an initial rendering result determining module 32, configured to draw a preset rendering object based on the depth map, and obtain an initial rendering result of the rendering object corresponding to each depth map;
a rendering area determining module 33, configured to divide the target image into a plurality of rendering areas based on the plurality of depth maps and an initial rendering result corresponding to each depth map; the rendering area comprises an edge area, a non-edge area and a transition area between the edge area and the non-edge area;
and the rendering module 34 is configured to render the rendering region based on an initial rendering result of the rendering object corresponding to each depth map by using a rendering mode preset in each rendering region, so as to obtain a rendered target image.
The rendering device in the game acquires a plurality of depth maps of the target image; drawing a preset rendering object based on the depth map to obtain an initial rendering result of the rendering object corresponding to each depth map; dividing the target image into an edge area, a non-edge area and a transition area between the edge area and the non-edge area based on the multiple depth maps and an initial rendering result corresponding to each depth map; and rendering the rendering area by adopting a preset rendering mode of each rendering area based on the initial rendering result of the rendering object corresponding to each depth map to obtain a rendered target image. In the method, the depth information among the depth maps is different, the target image is divided into an edge area, a non-edge area and a transition area through the depth maps of the target image and the initial rendering result corresponding to each depth map, different rendering modes are adopted for rendering aiming at different areas, the problems that mosaics appear in the edge area and the transition area and the shielding relation is incorrect are avoided, the rendering effect and the rendering efficiency are improved, and the game experience of a player is further improved.
Further, the obtaining module is further configured to: according to the depth information corresponding to each image area in the target image, perform down-sampling processing on the target image to obtain a plurality of depth maps; wherein, for the same image area, the depth information corresponding to the image area in different depth maps is different.
Further, the plurality of depth maps include a first depth map and a second depth map; for the same image area, the depth information corresponding to the image area in the first depth map is the maximum depth value in the image area, and the depth information corresponding to the image area in the second depth map is the minimum depth value in the image area.
Further, the initial rendering result determining module is further configured to: aiming at a plurality of depth maps, generating a rendering target corresponding to each depth map; responding to a drawing instruction aiming at the rendering object, and acquiring a depth value, a color value and a transparency value of each pixel point in the rendering object; for each pixel point in each depth map, if the depth value of the pixel point in the rendering object is smaller than that of the pixel point in the depth map, determining the color value of the pixel point in the rendering object and the target transparency value of the pixel point as an initial rendering result of the pixel point corresponding to the depth map; wherein the target transparency value is determined based on a preset mixing mode of the rendering object; and determining the initial rendering result of each pixel point in the depth map as the pixel value of the pixel point corresponding to the rendering target corresponding to the depth map.
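For illustration only, the depth test of this module can be sketched per render target as follows; the array names and shapes ((H, W) depth arrays and an (H, W, 4) RGBA render target) are assumptions of the sketch, and the transparency channel is updated separately by the blend-mode rule described next:

```python
# Illustrative sketch of the depth test when drawing the effect into the
# render target of one depth map.
import numpy as np

def draw_effect_into_target(depth_map, effect_depth, effect_rgba, target_rgba):
    visible = effect_depth < depth_map                    # keep fragments nearer than the stored depth
    target_rgba[visible, :3] = effect_rgba[visible, :3]   # write the colour value
    # the transparency channel is updated by the blend-mode rule sketched below
    return visible
```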
Further, the initial rendering result determining module is further configured to: if the preset mixed mode is a multiplication mixed mode, calculate the product of the transparency value of the pixel point in the rendering object and the transparency value of the pixel point stored in the rendering target, and determine the difference obtained by subtracting that product from the stored transparency value as the target transparency value; and if the preset mixed mode is an addition mixed mode, determine the transparency value of the pixel point stored in the rendering target as the target transparency value.
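A minimal sketch of the two target-transparency rules, with illustrative names (the string-valued mode selector is an assumption of the sketch):

```python
# Illustrative sketch of the two target-transparency rules.
def target_transparency(stored_a: float, src_a: float, mode: str) -> float:
    if mode == "multiply":
        # stored transparency minus the product of the two transparencies
        return stored_a - stored_a * src_a
    if mode == "add":
        # addition blending keeps the stored transparency unchanged
        return stored_a
    raise ValueError(f"unknown blend mode: {mode}")
```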
Further, the rendering area determining module is further configured to: calculating the error weight of each pixel point in the depth map based on the multiple depth maps and the initial rendering result corresponding to each depth map; the error weight is used for indicating the error magnitude of the pixel value of each channel of the pixel point; aiming at each pixel point in the depth maps, calculating the average depth value of the depth values of the pixel points in the depth maps; and dividing the target image into a plurality of rendering areas according to the error weight and the average depth value of each pixel point in the depth map.
Further, the rendering area determining module is further configured to: aiming at each pixel point in the depth map, setting the error weight of the pixel point as:
A=abs((zmax-zmin)*(maxDepthcolor-minDepthcolor)*(dzmaxX+dzmaxY));
wherein, A is the error weight of the pixel point; abs denotes the absolute value of (zmax-zmin)*(maxDepthcolor-minDepthcolor)*(dzmaxX+dzmaxY); zmax is the linear depth value of the pixel point in the first depth map; zmin is the linear depth value of the pixel point in the second depth map; maxDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the first depth map; minDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the second depth map; dzmaxX is the absolute value of the partial derivative of the linear depth value of the pixel point in the first depth map in the x direction; and dzmaxY is the absolute value of the partial derivative of the linear depth value of the pixel point in the first depth map in the y direction.
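As an illustrative reading of this formula, the sketch below computes the error weight for every pixel at once; np.gradient stands in for the screen-space partial derivatives (a shader would use ddx/ddy), which is an assumption of the sketch:

```python
# Illustrative sketch of the error-weight formula over whole (H, W) arrays.
import numpy as np

def error_weight(zmax, zmin, max_depth_color, min_depth_color):
    dz_y, dz_x = np.gradient(zmax)   # partial derivatives of the linear depth in y and x
    return np.abs((zmax - zmin)
                  * (max_depth_color - min_depth_color)
                  * (np.abs(dz_x) + np.abs(dz_y)))
```

Intuitively, the weight grows where the two depth samples disagree, where the two initial rendering results differ in color, and where the depth changes quickly across the screen, i.e. exactly where depth information is discontinuous.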
Further, the rendering area determining module is further configured to: aiming at each pixel point in the depth map, determining the error weight of the pixel point as a first pixel value of the pixel point; determining a designated pixel point with an average depth value smaller than that of the pixel point from adjacent pixel points of the pixel point, and determining the maximum error weight between the pixel point and the designated pixel point as a second pixel value of the pixel point, wherein the adjacent pixel points are the pixel points adjacent to the current pixel point in the horizontal direction and the vertical direction; and dividing the target image into a plurality of rendering areas according to the first pixel value and the second pixel value of each pixel point in the depth map.
Further, the rendering area determining module is further configured to: for each image region in the target image, performing the following operations: if the first pixel value and the second pixel value of a target pixel point corresponding to the image area in the depth map are both smaller than a preset threshold value, dividing the image area into non-edge areas; if the first pixel value of the target pixel point corresponding to the image area in the depth map is smaller than a preset threshold value and the second pixel value of the target pixel point corresponding to the image area in the depth map is larger than the preset threshold value, dividing the image area into transition areas; and if the first pixel value of the target pixel point corresponding to the image area in the depth map is larger than a preset threshold value, dividing the image area into edge areas.
Further, the rendering module is further configured to: determining a final rendering result of each pixel point in the target image by adopting a rendering mode preset in each rendering area based on an initial rendering result corresponding to each depth map; and mixing the final rendering result to the corresponding pixel point in the target image to obtain the rendered target image.
Further, the rendering module is further configured to: aiming at each pixel point of a non-edge area in the target image, setting the final rendering result of the pixel point as: Res1=MaxDepthRT*(1-Ad)+MainRT*Ad; Res1 is the rendering result of the pixel point; MaxDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the first depth map; Ad is the pixel value of the target pixel point transparency channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the first depth map; MainRT is the pixel value of the color channel of the pixel point in the target image.
Further, the rendering module is further configured to: aiming at each pixel point in the transition region in the target image, setting the final rendering result of the pixel point as: Res2=MinDepthRT*(1-Ax)+MainRT*Ax; Res2 is the rendering result of the pixel point; MinDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; Ax is the pixel value of the target pixel point transparency channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; MainRT is the pixel value of the color channel of the pixel point in the target image.
Further, the rendering module is further configured to: aiming at each pixel point of the edge area in the target image, setting the final rendering result of the pixel point as: Res3=MidDepthRT*(1-Am)+MainRT*Am; MidDepthRes=lerp(MaxDepthRT,MinDepthRT,(d-dx)/(dd-dx)); Res3 is the rendering result of the pixel point; MaxDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the first depth map; MinDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; d is the depth value of the pixel point in the target image; dd is the depth value of the target pixel point corresponding to the image area to which the pixel point belongs in the first depth map; dx is the depth value of the target pixel point corresponding to the image area to which the pixel point belongs in the second depth map; lerp is a linear interpolation function; MidDepthRes is the interpolated image; MidDepthRT is the color value of the color channel of the pixel point in the interpolated image MidDepthRes; Am is the transparency value of the transparency channel of the pixel point in the interpolated image MidDepthRes; MainRT is the pixel value of the color channel of the pixel point in the target image.
The rendering device in the game provided by the embodiment of the invention has the same technical characteristics as the rendering method in the game provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
The embodiment also provides an electronic device, which comprises a processor and a memory, wherein the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to realize the rendering method in the game. The electronic device may be a server or a terminal device.
Referring to fig. 4, the electronic device includes a processor 100 and a memory 101, the memory 101 stores machine executable instructions capable of being executed by the processor 100, and the processor 100 executes the machine executable instructions to implement the rendering method in the game.
Further, the electronic device shown in fig. 4 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The Memory 101 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the system's network element and at least one other network element is realized through at least one communication interface 103 (wired or wireless), which may use the Internet, a wide area network, a local area network, a metropolitan area network, and the like. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus and a control bus. For ease of illustration, only one double-headed arrow is shown in FIG. 4, but this does not indicate only one bus or one type of bus.
Processor 100 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by instructions in the form of software in the processor 100. The Processor 100 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by it. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may reside in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and, in combination with its hardware, completes the steps of the method of the foregoing embodiments.
The present embodiments also provide a machine-readable storage medium having stored thereon machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the rendering method in a game as described above.
The computer program product of the rendering method and apparatus in a game and the electronic device provided in the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and specific implementations may refer to the method embodiments, which are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; or as a direct connection, an indirect connection through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing embodiments are merely specific implementations used to illustrate, not limit, the technical solutions of the present invention, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the technical field can still modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some technical features, within the technical scope disclosed by the present invention; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (16)

1. A method of rendering in a game, the method comprising:
acquiring a plurality of depth maps of a target image; wherein the target image comprises at least one image region; the depth map comprises depth information corresponding to the image area;
drawing a preset rendering object based on the depth map to obtain an initial rendering result of the rendering object corresponding to each depth map;
dividing the target image into a plurality of rendering areas based on the plurality of depth maps and the initial rendering result corresponding to each depth map; wherein the rendering region comprises an edge region, a non-edge region, and a transition region between the edge region and the non-edge region;
and rendering the rendering area by adopting a preset rendering mode of each rendering area based on the initial rendering result of the rendering object corresponding to each depth map to obtain the rendered target image.
2. The method of claim 1, wherein the step of obtaining multiple depth maps of the target image comprises:
according to the depth information corresponding to each image area in the target image, performing down-sampling processing on the target image to obtain a plurality of depth maps; and aiming at the same image area, the depth information corresponding to the image area in different depth maps is different.
3. The method of claim 1, wherein the plurality of depth maps comprises a first depth map and a second depth map; for the same image area, the depth information corresponding to the image area in the first depth map is a maximum depth value in the image area, and the depth information corresponding to the image area in the second depth map is a minimum depth value in the image area.
4. The method according to claim 1, wherein the step of obtaining an initial rendering result of the rendering object corresponding to each depth map by drawing a preset rendering object based on the depth map comprises:
generating a rendering target corresponding to each depth map aiming at the plurality of depth maps;
responding to a drawing instruction aiming at the rendering object, and acquiring a depth value, a color value and a transparency value of each pixel point in the rendering object;
for each pixel point in each depth map, if the depth value of the pixel point in the rendering object is smaller than the depth value of the pixel point in the depth map, determining the color value of the pixel point in the rendering object and the target transparency value of the pixel point as an initial rendering result of the pixel point corresponding to the depth map; wherein the target transparency value is determined based on a preset blending mode of the rendering object;
and determining the initial rendering result of each pixel point in the depth map as the pixel value of the pixel point corresponding to the rendering target corresponding to the depth map.
5. The method according to claim 4, wherein the rendering target stores a transparency value of each pixel point; the step of determining the target transparency value based on a preset blending mode of the rendering object includes:
if the preset mixed mode is a multiplication mixed mode, calculating a product value of the transparency value of the pixel point in the rendering object and the transparency value of the pixel point stored in the rendering target;
subtracting the product value from the transparency value of the pixel point stored in the rendering target, and determining the resulting difference as the target transparency value;
and if the preset mixed mode is an addition mixed mode, determining the transparency value of the pixel point stored in the render target as the target transparency value.
6. The method of claim 1, wherein the step of dividing the target image into a plurality of rendering regions based on the plurality of depth maps and the initial rendering result corresponding to each depth map comprises:
calculating the error weight of each pixel point in the depth map based on the multiple depth maps and the initial rendering result corresponding to each depth map; wherein the error weight is used for indicating the error magnitude of the pixel value of each channel of the pixel point;
calculating the average depth value of the depth values of the pixel points in the plurality of depth maps aiming at each pixel point in the depth maps;
and dividing the target image into a plurality of rendering areas according to the error weight and the average depth value of each pixel point in the depth map.
7. The method of claim 6, wherein the step of calculating the error weight of each pixel point in the depth map based on the plurality of depth maps and the initial rendering result corresponding to each depth map comprises:
aiming at each pixel point in the depth map, setting the error weight of the pixel point as: A=abs((zmax-zmin)*(maxDepthcolor-minDepthcolor)*(dzmaxX+dzmaxY));
wherein A is the error weight of the pixel point; abs denotes the absolute value of (zmax-zmin)*(maxDepthcolor-minDepthcolor)*(dzmaxX+dzmaxY); zmax is the linear depth value of the pixel point in the first depth map; zmin is the linear depth value of the pixel point in the second depth map; maxDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the first depth map; minDepthcolor is the color value of the pixel point in the initial rendering result corresponding to the second depth map; dzmaxX is the absolute value of the partial derivative of the linear depth value of the pixel point in the first depth map in the x direction; and dzmaxY is the absolute value of the partial derivative of the linear depth value of the pixel point in the first depth map in the y direction.
8. The method of claim 6, wherein the step of dividing the target image into a plurality of rendering regions according to the error weight and the average depth value of each pixel point in the depth map comprises:
aiming at each pixel point in the depth map, determining the error weight of the pixel point as a first pixel value of the pixel point; determining a designated pixel point with an average depth value smaller than that of the pixel point from adjacent pixel points of the pixel point, and determining the maximum error weight between the pixel point and the designated pixel point as a second pixel value of the pixel point; wherein the adjacent pixel points comprise the pixel points adjacent to the current pixel point in the horizontal direction and the vertical direction;
and dividing the target image into a plurality of rendering areas according to the first pixel value and the second pixel value of each pixel point in the depth map.
9. The method of claim 8, wherein the step of dividing the target image into a plurality of rendering regions according to the first pixel value and the second pixel value of each pixel point in the depth map comprises:
for each image region in the target image, performing the following operations:
if the first pixel value and the second pixel value of a target pixel point corresponding to the image area in the depth map are both smaller than a preset threshold value, dividing the image area into the non-edge area;
if the first pixel value of the target pixel point corresponding to the image area in the depth map is smaller than the preset threshold value and the second pixel value of the target pixel point corresponding to the image area in the depth map is larger than the preset threshold value, dividing the image area into the transition area;
and if the first pixel value of the target pixel point corresponding to the image area in the depth map is larger than the preset threshold value, dividing the image area into the edge area.
10. The method according to claim 1, wherein the step of rendering the rendering area by using a preset rendering mode of each rendering area based on an initial rendering result of the rendering object corresponding to each depth map to obtain the rendered target image comprises:
determining a final rendering result of each pixel point in the target image by adopting a preset rendering mode of each rendering area based on the initial rendering result corresponding to each depth map;
and mixing the final rendering result to the corresponding pixel point in the target image to obtain the rendered target image.
11. The method according to claim 10, wherein the step of determining a final rendering result of each pixel point in the target image by using a preset rendering mode of each rendering area based on the initial rendering result corresponding to each depth map comprises:
aiming at each pixel point of a non-edge area in the target image, setting the final rendering result of the pixel point as:
Res1=MaxDepthRT*(1-Ad)+MainRT*Ad;
Res1 is the rendering result of the pixel point; MaxDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the first depth map; Ad is the pixel value of the target pixel point transparency channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the first depth map; MainRT is the pixel value of the pixel point color channel in the target image.
12. The method according to claim 10, wherein the step of determining a final rendering result of each pixel point in the target image by using a preset rendering mode of each rendering area based on the initial rendering result corresponding to each depth map comprises:
aiming at each pixel point in the transition region in the target image, setting the final rendering result of the pixel point as:
Res2=MinDepthRT*(1-Ax)+MainRT*Ax;
Res2 is the rendering result of the pixel point; MinDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; Ax is the pixel value of the target pixel point transparency channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; MainRT is the pixel value of the pixel point color channel in the target image.
13. The method according to claim 10, wherein the step of determining a final rendering result of each pixel point in the target image by using a preset rendering mode of each rendering area based on the initial rendering result corresponding to each depth map comprises:
aiming at each pixel point of the edge area in the target image, setting the final rendering result of the pixel point as:
Res3=MidDepthRT*(1-Am)+MainRT*Am;
MidDepthRes=lerp(MaxDepthRT,MinDepthRT,(d-dx)/(dd-dx));
Res3 is the rendering result of the pixel point; MaxDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the first depth map; MinDepthRT is the pixel value of the target pixel point color channel corresponding to the image area to which the pixel point belongs in the initial rendering result corresponding to the second depth map; d is the depth value of the pixel point in the target image; dd is the depth value of the target pixel point corresponding to the image area to which the pixel point belongs in the first depth map; dx is the depth value of the target pixel point corresponding to the image area to which the pixel point belongs in the second depth map; lerp is a linear interpolation function; MidDepthRes is the interpolated image; MidDepthRT is the color value of the color channel of the pixel point in the interpolated image MidDepthRes; Am is the transparency value of the transparency channel of the pixel point in the interpolated image MidDepthRes; MainRT is the pixel value of the pixel point color channel in the target image.
14. An in-game rendering apparatus, comprising:
the acquisition module is used for acquiring a plurality of depth maps of the target image; wherein the target image comprises at least one image region; the depth map comprises depth information corresponding to the image area;
an initial rendering result determining module, configured to draw a preset rendering object based on the depth map, and obtain an initial rendering result of the rendering object corresponding to each depth map;
a rendering region determining module, configured to divide the target image into a plurality of rendering regions based on the plurality of depth maps and the initial rendering result corresponding to each depth map; wherein the rendering region comprises an edge region, a non-edge region, and a transition region between the edge region and the non-edge region;
and the rendering module is used for rendering the rendering area by adopting a preset rendering mode of each rendering area based on the initial rendering result of the rendering object corresponding to each depth map to obtain the rendered target image.
15. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the in-game rendering method of any one of claims 1-13.
16. A machine-readable storage medium having stored thereon machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the in-game rendering method of any of claims 1-13.
CN202111076331.XA 2021-09-14 2021-09-14 Rendering method and device in game and electronic equipment Active CN113781620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111076331.XA CN113781620B (en) 2021-09-14 2021-09-14 Rendering method and device in game and electronic equipment

Publications (2)

Publication Number Publication Date
CN113781620A true CN113781620A (en) 2021-12-10
CN113781620B CN113781620B (en) 2023-06-30

Family

ID=78843872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111076331.XA Active CN113781620B (en) 2021-09-14 2021-09-14 Rendering method and device in game and electronic equipment

Country Status (1)

Country Link
CN (1) CN113781620B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090002365A1 (en) * 2007-06-27 2009-01-01 Nintendo Co., Ltd. Image processing program and image processing apparatus
CN102708585A (en) * 2012-05-09 2012-10-03 北京像素软件科技股份有限公司 Method for rendering contour edges of models
US20150161813A1 (en) * 2011-10-04 2015-06-11 Google Inc. Systems and method for performing a three pass rendering of images
CN104794699A (en) * 2015-05-08 2015-07-22 四川天上友嘉网络科技有限公司 Image rendering method applied to games
CN109767466A (en) * 2019-01-10 2019-05-17 深圳看到科技有限公司 Picture rendering method, device, terminal and corresponding storage medium
CN112316424A (en) * 2021-01-06 2021-02-05 腾讯科技(深圳)有限公司 Game data processing method, device and storage medium

Also Published As

Publication number Publication date
CN113781620B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
US7453459B2 (en) Composite rendering 3-D graphical objects
US7532222B2 (en) Anti-aliasing content using opacity blending
US6982723B1 (en) Method and apparatus for eliminating unwanted steps at edges in graphic representations in the line raster
US10699466B2 (en) Apparatus and method for generating a light intensity image
US20140118351A1 (en) System, method, and computer program product for inputting modified coverage data into a pixel shader
US20070120858A1 (en) Generation of motion blur
US8854392B2 (en) Circular scratch shader
CN109542574B (en) Pop-up window background blurring method and device based on OpenGL
Malan Edge Antialiasing by Post-Processing
US11783527B2 (en) Apparatus and method for generating a light intensity image
CN113781620A (en) Rendering method and device in game and electronic equipment
CN112419147B (en) Image rendering method and device
CN114359451A (en) Method and system for accelerating image rendering using motion compensation
CN116670719A (en) Graphic processing method and device and electronic equipment
Gao et al. Virtual view synthesis based on DIBR and image inpainting
CN116156089B (en) Method, apparatus, computing device and computer readable storage medium for processing image
US11288788B2 (en) Anti-aliasing for distance field graphics rendering
CN112862930A (en) Game scene processing method and device and electronic equipment
CN111951343A (en) Image generation method and device and image display method and device
CN117218270A (en) Rendering method and device of transition region, electronic equipment and storage medium
CN117876274A (en) Method, apparatus, computing device and computer readable storage medium for processing image
CN115761100A (en) Scene rendering method and device, electronic equipment and storage medium
CN115984390A (en) Image processing method, related device, storage medium and program product
BR112021015772A2 (en) METHOD FOR GENERATING A LIGHT INTENSITY IMAGE, APPARATUS FOR GENERATING A LIGHT INTENSITY IMAGE AND COMPUTER PROGRAM PRODUCT

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant