CN108830923B - Image rendering method and device and storage medium

Image rendering method and device and storage medium

Info

Publication number
CN108830923B
CN108830923B (application CN201810589016.9A)
Authority
CN
China
Prior art keywords
shadow
terrain
illumination sampling
illumination
virtual object
Prior art date
Legal status
Active
Application number
CN201810589016.9A
Other languages
Chinese (zh)
Other versions
CN108830923A (en)
Inventor
吴东东
杜志荣
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN201810589016.9A priority Critical patent/CN108830923B/en
Publication of CN108830923A publication Critical patent/CN108830923A/en
Application granted granted Critical
Publication of CN108830923B publication Critical patent/CN108830923B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/10: Geometric effects
    • G06T15/20: Perspective computation
    • G06T15/205: Image-based rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/50: Lighting effects
    • G06T15/506: Illumination models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/50: Lighting effects
    • G06T15/60: Shadow generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the invention provide an image rendering method, an image rendering device, and a storage medium. A graphical user interface is rendered by executing a software application; the content presented by the graphical user interface includes a game scene picture and at least partially includes a controllable virtual object. The image rendering method includes: when the virtual object is detected to move into the range of a terrain unit, acquiring shadow brightness information of the illumination sampling points corresponding to that terrain unit, where the terrain unit is a component unit of the terrain in the game scene picture, and the shadow brightness information includes the gray value of each illumination sampling point and a characteristic value indicating whether the illumination sampling point is located in shadow; and sampling the shadow brightness information and rendering the virtual object. When rendering the game picture, embodiments of the invention can make the illumination brightness of the virtual object under a shadow differ from that outside the shadow while reducing game performance overhead.

Description

Image rendering method and device and storage medium
Technical Field
Embodiments of the present invention relate to image processing technologies, and in particular, to an image rendering method and apparatus, and a storage medium.
Background
Network games often involve dynamic objects, such as controllable virtual objects.
At present, to avoid a dynamic object having the same illumination brightness under a shadow as outside it, the prior art renders the game picture with a point cloud technique: illumination information for the surfaces of all objects in the game picture is computed offline and loaded into a 3D data structure while the game runs; when a dynamic object is rendered, the several point clouds closest to it are found first, and the influence of those point clouds on the dynamic object is calculated.
Although the prior art can make the illumination brightness of a dynamic object under a shadow differ from that outside the shadow, it incurs considerable game performance overhead.
Disclosure of Invention
Embodiments of the present invention provide an image rendering method, an image rendering device, and a storage medium that, when a game picture is rendered, make the illumination brightness of a virtual object under a shadow differ from that outside the shadow while reducing game performance overhead.
In a first aspect, an embodiment of the present invention provides an image rendering method. A graphical user interface is rendered by executing a software application; the content presented by the graphical user interface includes a game scene picture and at least partially includes a controllable virtual object. The image rendering method includes: when the virtual object is detected to move into the range of a terrain unit, acquiring shadow brightness information of the illumination sampling points corresponding to the terrain unit, where the terrain unit is a component unit of the terrain in the game scene picture, and the shadow brightness information includes the gray values of the illumination sampling points and characteristic values indicating whether the illumination sampling points are located in shadow; and sampling the shadow brightness information and rendering the virtual object.
In a second aspect, an embodiment of the present invention provides an image rendering apparatus. A graphical user interface is rendered by executing a software application; the content presented by the graphical user interface includes a game scene picture and at least partially includes a manipulable virtual object. The image rendering apparatus includes: an obtaining module, configured to obtain the shadow brightness information of the illumination sampling points corresponding to a terrain unit when the virtual object is detected to move into the range of the terrain unit, where the terrain unit is a component unit of the terrain in the game scene picture, and the shadow brightness information includes the gray values of the illumination sampling points and characteristic values indicating whether the illumination sampling points are located in shadow; and a processing module, configured to sample the shadow brightness information and render the virtual object.
In a third aspect, an embodiment of the present invention provides a terminal, including:
a memory for storing executable instructions of the processor;
a processor; and
a display device;
wherein the processor is configured to perform the image rendering method of the first aspect via execution of the executable instructions and render a graphical user interface on the display device.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the image rendering method according to the first aspect.
The image rendering method, image rendering device, and storage medium provided by the embodiments of the present invention have at least the following advantages:
when the virtual object is detected to move into the range of a terrain unit, the shadow brightness information of the illumination sampling points corresponding to that terrain unit is acquired, and the virtual object is rendered by sampling the shadow brightness information. Because this is the shadow brightness information of the illumination sampling points of the terrain unit the virtual object has moved into, and its data volume is far smaller than that required by the prior-art point cloud rendering of dynamic objects, the method of this embodiment makes the illumination brightness of the virtual object under a shadow differ from that outside the shadow while reducing game performance overhead when rendering the game picture.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, a brief description will be given below of the drawings required for the embodiments or the technical solutions in the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of an image rendering method according to an embodiment of the present invention;
fig. 2 is a schematic view of a virtual object provided by an embodiment of the present invention in a scene outside a shadow;
fig. 3 is a schematic view of a scenario of a virtual object under a shadow according to an embodiment of the present invention;
FIG. 4 is a flowchart of an image rendering method according to another embodiment of the present invention;
fig. 5 is a scene example diagram of obtaining illumination sampling points corresponding to a terrain unit by using ray collision detection according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an image rendering apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprises" and "comprising," and any variations thereof, in the description of embodiments of the invention, the claims, and the drawings described above, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
To address the considerable game performance overhead incurred when a game picture is rendered by the point cloud technology of the prior art, the embodiments of the present invention provide an image rendering method, an image rendering device, and a storage medium.
Fig. 1 is a flowchart of an image rendering method according to an embodiment of the present invention. The method may be executed by a background server of the game software/client or by a terminal on which the game software/client is installed, but is not limited to these. Examples of the terminal include a desktop computer, a notebook, a Personal Digital Assistant (PDA), a smartphone, a tablet computer, and a game machine. In this embodiment, the terminal is taken as the executing entity for description; execution by the server is similar and is not repeated here.
The graphical user interface is rendered by executing a software application. Illustratively, the software application is executed on a processor of the terminal and rendered on a display device of the terminal to obtain the graphical user interface, so that when the game picture is rendered, the illumination brightness of a virtual object under a shadow differs from that outside the shadow while game performance overhead is reduced. Specifically, a player may download game software, install it on the terminal, and play the game by operating the terminal.
The graphical user interface is an interface for the terminal to interact with the user, and the user can control the game running in the terminal through the operation of the graphical user interface. Meanwhile, the content presented by the graphical user interface comprises a game scene picture and at least partially comprises a controllable virtual object.
As shown in fig. 1, the method of the present embodiment includes:
s101, when the virtual object is detected to move to the range of the terrain unit, acquiring shadow brightness information of the illumination sampling point corresponding to the terrain unit.
The terrain unit is a component unit of the terrain in the game scene picture. Specifically, the terrain in the game scene picture may be divided by a preset size to obtain a plurality of terrain units. A terrain unit may be a regular or an irregular shape; for ease of processing, square or rectangular terrain units are used in the embodiments of the present invention.
The shadow brightness information includes the gray value of the illumination sampling point and a characteristic value indicating whether the illumination sampling point is located in shadow. The characteristic value is set in advance; any encoding works as long as it distinguishes in-shadow from out-of-shadow. Illustratively, the characteristic value is 0 when the illumination sampling point is in shadow and 1 when it is outside the shadow. The gray value is a single number converted from an RGB color to represent brightness: the larger the gray value, the brighter the point.
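As a minimal sketch of this data, assuming a plain Python representation (the names ShadowBrightness and gray_from_rgb are illustrative, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class ShadowBrightness:
    """Shadow brightness information of one illumination sampling point."""
    gray: float      # gray value: brightness converted from the RGB color
    in_shadow: int   # characteristic value: 0 = in shadow, 1 = outside shadow

def gray_from_rgb(r: float, g: float, b: float) -> float:
    """Convert an RGB color (components in [0, 1]) to a gray value,
    using the luma weights quoted later in this description."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```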
In practical applications, a user may control the virtual object to move in a game scene rendered by the graphical user interface. When the virtual object moves to different positions, such as under shadow and out of shadow, the illumination brightness of the virtual object is different as shown in fig. 2 and 3, where fig. 2 shows a scene where the virtual object is out of shadow and fig. 3 shows a scene where the virtual object is under shadow. Therefore, according to the geographical position of the virtual object, the embodiment of the present invention obtains the factors affecting the illumination brightness of the virtual object, that is, the shadow brightness information of the illumination sampling point. Fig. 2 and 3 both illustrate the virtual object as a human, but the embodiment of the present invention does not limit the virtual object, and the virtual object may be a movable dynamic object.
In general, the illumination sampling points corresponding to different terrain units are different. However, the embodiments of the present invention do not exclude a special case where the illumination sampling points corresponding to different terrain units may be the same. In addition, the shadow shading information of different illumination sampling points may be different, or the shadow shading information of some or all of the illumination sampling points may be the same, which is not limited in the embodiments of the present invention.
It should be noted that the shadow brightness information of the illumination sampling points corresponding to each terrain unit is obtained by the terminal offline; in some embodiments, "obtaining" in this step may therefore be understood as "reading". Compared with obtaining the shadow brightness information of the illumination sampling points in real time, this clearly improves acquisition efficiency. The specific acquisition process is described in the following embodiments and is not repeated here.
S102, sample the shadow brightness information and render the virtual object.
Compared with the prior-art implementation, which, when rendering a dynamic object, first finds the several point clouds closest to it and then calculates their influence on it, this step only samples the shadow brightness information of the illumination sampling points corresponding to the terrain unit in order to render the virtual object. The data volume of that shadow brightness information is clearly smaller than that of the several point clouds closest to the dynamic object, so this step is more efficient.
In this embodiment, when the virtual object is detected to move into the range of a terrain unit, the shadow brightness information of the illumination sampling points corresponding to that terrain unit is obtained, and the virtual object is rendered by sampling the shadow brightness information. Because this is the shadow brightness information of the illumination sampling points of the terrain unit the virtual object has moved into, and its data volume is far smaller than that required by the prior-art point cloud rendering of dynamic objects, the method of this embodiment makes the illumination brightness of the virtual object under a shadow differ from that outside the shadow while reducing game performance overhead when rendering the game picture.
In one implementation of the foregoing embodiment, S102 (sampling the shadow brightness information and rendering the virtual object) may specifically be: sample the shadow brightness information; and calculate the color value of each pixel of the virtual object from the sampled data.
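The patent does not fix a formula for how the sampled data enters the per-pixel color calculation; the following sketch shows one plausible reading, in which the factor max(gray value, characteristic value) (the combination described for S402 below) modulates the object's base colors:

```python
def shade_virtual_object(albedo, info):
    """Modulate each pixel color of the virtual object by the sampled
    shadow brightness; a sketch, not the patent's exact shader math.

    albedo -- list of (r, g, b) base colors, one per pixel
    info   -- the ShadowBrightness sampled for the terrain unit
    """
    # Outside a shadow the factor is 1 (full brightness);
    # under a shadow it falls back to the gray value.
    factor = max(info.gray, info.in_shadow)
    return [(r * factor, g * factor, b * factor) for (r, g, b) in albedo]
```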
Fig. 4 is a flowchart of an image rendering method according to another embodiment of the present invention. Referring to fig. 4, on the basis of the flow shown in fig. 1, the image rendering method in this embodiment may include:
s401, adopting ray collision detection to obtain illumination sampling points corresponding to the terrain units.
Optionally, this step may specifically be: performing ray collision detection downward from the highest point corresponding to a preset position of the terrain unit; each time a ray hits an obstacle, an illumination sampling point is obtained.
Referring to fig. 5, when the virtual object is located on the roof, the range of the terrain unit where the virtual object is located is determined first, and ray collision detection is then performed downward from the highest point corresponding to a preset position of the terrain unit, for example its center. There are two collisions with obstacles: the first collision is with the roof, yielding one illumination sampling point; the next ray collision detection then continues downward from that collision point, and the second collision is with the ground (as shown in fig. 5). A plurality of illumination sampling points corresponding to the terrain unit is thus collected in order: at the intersection of the ray with the roof and at the intersection of the ray with the ground. It can be understood that the highest point is the highest point of the game scene picture presented by the graphical user interface above the preset position of the terrain unit.
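A sketch of this downward ray-casting loop; raycast_down is a hypothetical engine hook standing in for whatever collision query the engine provides:

```python
def collect_illumination_samples(x, z, top_y, raycast_down, eps=0.01):
    """Cast rays straight down from the highest point above (x, z) and
    record one illumination sampling point per obstacle hit (S401).

    raycast_down(x, y, z) returns the first hit below height y as
    (hit_y, hit_object), or None when nothing further is hit.
    """
    samples = []
    y = top_y
    while True:
        hit = raycast_down(x, y, z)
        if hit is None:
            break
        hit_y, hit_object = hit
        samples.append((x, hit_y, z, hit_object))  # e.g. roof, then ground
        y = hit_y - eps  # continue the next cast just below the hit point
    return samples
```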
Further, the number of preset positions may be 1 or more (where "more" means two or more). For example, with 1 preset position, it may be the center of the terrain unit or any other position; with multiple preset positions, for example 4, a terrain unit may be subdivided into 2 × 2 cells and the sampling procedure above performed at a random position within each cell. Optionally, the four illumination sampling points are interpolated once, so that the resulting shadow brightness value better reflects the illumination of the terrain unit and the rendering effect is better, as sketched below.
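A sketch of the four-position variant under the same assumptions: one random sample in each 2 × 2 sub-cell, followed by the one-time interpolation (here, a plain average); sample_at stands in for the ray-casting procedure above:

```python
import random

def sample_cell_2x2(min_x, min_z, cell_size, sample_at):
    """Sample a random position in each of the 2 x 2 sub-cells of a
    terrain unit, then average the four gray values once."""
    half = cell_size / 2.0
    grays = []
    for i in (0, 1):
        for j in (0, 1):
            x = min_x + i * half + random.uniform(0.0, half)
            z = min_z + j * half + random.uniform(0.0, half)
            grays.append(sample_at(x, z))  # gray value at that position
    return sum(grays) / len(grays)
```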
S402, acquiring shadow brightness information of the illumination sampling points.
Still taking fig. 5 as an example, the first illumination sampling point, the intersection of the ray with the roof, has characteristic value 1; the second illumination sampling point, the intersection of the ray with the ground, has characteristic value 0. Obtaining the gray value of each illumination sampling point can follow the prior art and is not explained again here.
Optionally, this step may include: acquiring the UV coordinates on the obstacle; and acquiring the shadow brightness information of the illumination sampling point according to those UV coordinates. Exemplarily, since most game models are triangle meshes, the UV coordinates within the triangle hit on the obstacle are obtained, the lightmap is sampled at those coordinates, and max(grayscale, alpha) is taken as the shadow brightness information of the illumination sampling point, where alpha is the characteristic value indicating whether the sampling point is in shadow and grayscale is the gray value of the sampled color.
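A sketch of this UV-based lookup, assuming a hypothetical lightmap.sample(u, v) API that returns the (grayscale, alpha) pair described above:

```python
def shadow_info_from_lightmap(lightmap, u, v):
    """Sample the lightmap at the obstacle's UV coordinates and combine
    the two channels as described above: max(grayscale, alpha),
    where alpha is the in-shadow characteristic value and grayscale
    is the gray value of the sampled color."""
    grayscale, alpha = lightmap.sample(u, v)
    return max(grayscale, alpha)
```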
Because different terrain units, or different grids within the same terrain unit, may have different numbers of illumination sampling points, the shadow brightness information of illumination sampling points corresponding to adjacent terrain units (or grids) is filtered once so that the values do not change abruptly where terrain units (or grids) join. Those skilled in the art will appreciate that the filtering here replaces each pixel with the average of the 9 pixels of its 3 × 3 neighbourhood; once the illumination sampling points have been mapped to UV coordinates, each sampling point corresponds to a pixel.
In summary, after obtaining the shadow shading information of the illumination sampling point, the image rendering method may further include:
and S403, filtering shadow shading information of illumination sampling points corresponding to adjacent terrain units (or grids).
In addition, consider the case where adjacent terrain units have different numbers of illumination sampling points. Taking fig. 5 as an example, the terrain unit containing the house has 2 illumination sampling points, while a terrain unit over the open space outside the house has 1. In this case, the unit with fewer illumination sampling points is expanded to the larger number; each added sampling point gets the characteristic value 1, and its gray value can be obtained from the sampled color by: gray value = 0.2126 × red intensity + 0.7152 × green intensity + 0.0722 × blue intensity. The illumination sampling points close to one another are then interpolated with each other, i.e., summed and averaged.
Therefore, after acquiring the shadow brightness information of the illumination sampling points, the image rendering method of this embodiment may further include:
and S404, carrying out interpolation processing on illumination sampling points corresponding to adjacent terrain units.
The gray value of an illumination sampling point obtained through interpolation is the average of the gray values of the adjacent illumination sampling points, and its characteristic value is a preset value, such as 1, as sketched below.
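A simplified sketch of this expansion, using the ShadowBrightness record from earlier; the description derives each added point's gray value from its sampled color via the luma formula above, whereas this sketch assumes the average of the existing points for brevity:

```python
def equalize_sample_counts(unit_a, unit_b):
    """Pad whichever adjacent terrain unit has fewer illumination
    sampling points up to the larger count: each added point gets the
    preset characteristic value 1 and an interpolated (averaged) gray."""
    longer, shorter = (unit_a, unit_b) if len(unit_a) >= len(unit_b) else (unit_b, unit_a)
    avg_gray = sum(p.gray for p in shorter) / len(shorter)
    while len(shorter) < len(longer):
        shorter.append(ShadowBrightness(gray=avg_gray, in_shadow=1))
    return unit_a, unit_b
```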
It should be noted that the interpolation of the illumination sampling points corresponding to adjacent terrain units may also be performed after the filtering of the shadow brightness information of the illumination sampling points corresponding to adjacent terrain units (or grids). S403 and S404 are optional steps and may be performed selectively.
S405, when the virtual object is detected to move to the range of the terrain unit, acquiring shadow brightness information of the illumination sampling point corresponding to the terrain unit.
The step is the same as S101, and is not described herein again.
S406, shadow shading information is sampled, and the virtual object is rendered.
The step is the same as step S102, and is not described herein.
In this embodiment, S401 to S404 may be offline operations, that is, performed before the game is run; alternatively, S401 to S404 may be online operations, that is, performed during the game execution, as long as it is guaranteed to be before S405.
On this basis, the shadow brightness information of the illumination sampling points as originally obtained (before processing), or the data obtained through the processing above (including at least one of the interpolation and the filtering), is further stored. That is, after the shadow brightness information of the illumination sampling points is obtained, the image rendering method may further include: storing the shadow brightness information of the illumination sampling points per corresponding terrain unit. Illustratively, the shadow brightness information of the illumination sampling points, or the processed data, is compressed and stored in a file terrain unit by terrain unit, and the start position of each terrain unit's data is recorded in the file header, so that the data of a specified terrain unit can be loaded for rendering while the game runs, as sketched below.
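A sketch of one possible on-disk layout matching this description: per-terrain-unit compressed blobs with each unit's start offset recorded in the header. The field widths and the use of zlib are assumptions for illustration:

```python
import struct
import zlib

def save_shadow_data(path, per_unit_blobs):
    """Write one file: header = unit count + one start offset per unit,
    body = the zlib-compressed shadow brightness data of each unit,
    so the data of a specified terrain unit can be loaded alone."""
    compressed = [zlib.compress(blob) for blob in per_unit_blobs]
    header_size = 4 + 8 * len(compressed)  # uint32 count + uint64 offsets
    offsets, pos = [], header_size
    for blob in compressed:
        offsets.append(pos)
        pos += len(blob)
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(compressed)))
        for off in offsets:
            f.write(struct.pack("<Q", off))
        for blob in compressed:
            f.write(blob)
```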
On the basis of the flow shown in fig. 1, before step S101 (obtaining the shadow brightness information of the illumination sampling points corresponding to the terrain unit when the virtual object is detected to move into its range), the image rendering method may further include: when the terrain unit is within the user's field of view, rendering the terrain unit on the graphical user interface.
The following is an embodiment of the apparatus of the present invention, which can be used to implement the above-mentioned embodiment of the method of the present invention, and the implementation principle and technical effects are similar.
Fig. 6 is a schematic structural diagram of an image rendering apparatus according to an embodiment of the present invention. The image rendering device renders a graphical user interface by executing a software application; the content presented by the graphical user interface includes a game scene picture and at least partially includes a controllable virtual object. As shown in fig. 6, the image rendering device 60 of this embodiment includes an obtaining module 61 and a processing module 62.
the obtaining module 61 is configured to obtain shadow brightness information of the illumination sampling point corresponding to the terrain unit when it is detected that the virtual object moves into the range of the terrain unit, where the terrain unit is a component unit of the terrain in the game scene picture, and the shadow brightness information includes a gray value of the illumination sampling point and a characteristic value of whether the gray value is located in a shadow.
The processing module 62 is configured to sample the shadow brightness information and render the virtual object.
When the device of this embodiment detects that the virtual object moves into the range of a terrain unit, it acquires the shadow brightness information of the illumination sampling points corresponding to that terrain unit and renders the virtual object by sampling the shadow brightness information. Because this is the shadow brightness information of the illumination sampling points of the terrain unit the virtual object has moved into, and its data volume is far smaller than that required by the prior-art point cloud rendering of dynamic objects, the device of this embodiment makes the illumination brightness of the virtual object under a shadow differ from that outside the shadow while reducing game performance overhead when rendering the game picture.
Optionally, the processing module 62 may be specifically configured to: sampling the shadow brightness information; and calculating the color value of each pixel point corresponding to the virtual object according to the sampled data.
Further, the obtaining module 61 is further configured to: before acquiring shadow brightness information of illumination sampling points corresponding to the terrain units, adopting ray collision detection to obtain the illumination sampling points corresponding to the terrain units; and acquiring shadow brightness information of the illumination sampling point.
In some embodiments, when the obtaining module 61 performs ray collision detection to obtain the illumination sampling points corresponding to the terrain units, it is specifically configured to: perform ray collision detection downward from the highest point corresponding to a preset position of the terrain unit; each time a ray hits an obstacle, an illumination sampling point is obtained.
The number of the preset positions can be 1 or more.
In some embodiments, when the obtaining module 61 is configured to obtain the shadow shading information of the illumination sampling point, the obtaining module may specifically be: acquiring UV coordinates of the obstacle; and acquiring shadow brightness information of the illumination sampling point according to the UV coordinate.
Further, the processing module 62 is further configured to: after the obtaining module 61 obtains the shadow brightness information of the illumination sampling point, the shadow brightness information of the illumination sampling point corresponding to the adjacent terrain unit is filtered.
And/or, the processing module 62 is further configured to: after the obtaining module 61 obtains the shadow brightness information of the illumination sampling points, interpolation processing is performed on the illumination sampling points corresponding to adjacent terrain units to unify the number of the illumination sampling points of different terrain units. The gray value of the illumination sampling point obtained through interpolation is the average value of the gray values of the adjacent illumination sampling points, and the characteristic value of the illumination sampling point obtained through interpolation is a preset value. The predetermined value is, for example, 1.
Optionally, the image rendering device 60 may further include a storage module (not shown). The storage module is configured to: after the obtaining module 61 acquires the shadow brightness information of the illumination sampling points, store the shadow brightness information of the illumination sampling points per corresponding terrain unit.
In some embodiments, the processing module 62 may also be configured to: before the obtaining module 61 obtains the shadow brightness information of the illumination sampling points corresponding to the terrain unit, and when the terrain unit is within the user's field of view, render the terrain unit on the graphical user interface.
Fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention. As shown in fig. 7, the terminal 70 of the present embodiment includes: a memory 71, a processor 72 and a display device 73. Wherein the memory 71, the processor 72 and the display device 73 are interconnected. The graphical user interface is obtained by executing a software application on the processor 72 and rendering on the display device 73. The content presented by the graphical user interface comprises a game scene picture and at least partially comprises a controllable virtual object.
A memory 71 for storing program instructions.
A processor 72 for implementing the following steps when the program instructions are executed:
when the virtual object is detected to move into the range of a terrain unit, obtaining the shadow brightness information of the illumination sampling points corresponding to the terrain unit, where the terrain unit is a component unit of the terrain in the game scene picture, and the shadow brightness information includes the gray value of each illumination sampling point and a characteristic value indicating whether the illumination sampling point is located in the shadow;
and sampling the shadow brightness information and rendering the virtual object.
Optionally, when the processor 72 implements the step of sampling the shadow brightness information and rendering the virtual object, the step is specifically: sampling the shadow brightness information; and calculating the color value of each pixel of the virtual object from the sampled data.
Optionally, the processor 72 is further configured to implement the following steps when the program instructions are executed: before acquiring shadow brightness information of illumination sampling points corresponding to the terrain units, adopting ray collision detection to obtain the illumination sampling points corresponding to the terrain units; and acquiring shadow brightness information of the illumination sampling point.
Optionally, when the processor 72 implements the step of obtaining the illumination sampling points corresponding to the terrain unit by ray collision detection, the step is specifically: performing ray collision detection downward from the highest point corresponding to a preset position of the terrain unit; each time a ray hits an obstacle, an illumination sampling point is obtained.
The number of the preset positions can be 1 or more.
In some embodiments, when the processor 72 implements the step of obtaining the shadow brightness information of the illumination sampling points, the step is specifically: acquiring the UV coordinates on the obstacle; and acquiring the shadow brightness information of the illumination sampling point according to the UV coordinates.
In some embodiments, the processor 72 is further configured to implement the following steps when the program instructions are executed: and after the shadow brightness information of the illumination sampling points is obtained, filtering the shadow brightness information of the illumination sampling points corresponding to the adjacent terrain units.
In some embodiments, the processor 72 is further configured to implement the following steps when the program instructions are executed: after the shadow brightness information of the illumination sampling points is obtained, interpolation processing is carried out on the illumination sampling points corresponding to the adjacent terrain units. The gray value of the illumination sampling point obtained through interpolation is the average value of the gray values of the adjacent illumination sampling points, and the characteristic value of the illumination sampling point obtained through interpolation is a preset value.
In some embodiments, the processor 72 is further configured to implement the following step when the program instructions are executed: after the shadow brightness information of the illumination sampling points is acquired, storing the shadow brightness information of the illumination sampling points per corresponding terrain unit.
Further, the processor 72 is also configured to implement the following step when the program instructions are executed: before the shadow brightness information of the illumination sampling points corresponding to the terrain unit is acquired upon detecting that the virtual object moves into the range of the terrain unit, and when the terrain unit is within the user's field of view, rendering the terrain unit on the graphical user interface.
When the terminal of this embodiment detects that the virtual object moves into the range of a terrain unit, it acquires the shadow brightness information of the illumination sampling points corresponding to that terrain unit and renders the virtual object by sampling the shadow brightness information. Because this is the shadow brightness information of the illumination sampling points of the terrain unit the virtual object has moved into, and its data volume is far smaller than that required by the prior-art point cloud rendering of dynamic objects, the terminal of this embodiment makes the illumination brightness of the virtual object under a shadow differ from that outside the shadow while reducing game performance overhead when rendering the game picture.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program can be executed by the processor 72 shown in fig. 7 or a processor in other devices to implement the image rendering method according to any of the above embodiments, and specific implementation and beneficial effects of the method can be seen in the above embodiments, which are not described herein again.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The computer program may be stored in a computer readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media capable of storing program codes, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, and an optical disk.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (12)

1. An image rendering method, wherein a graphical user interface is rendered by executing a software application, the content presented by the graphical user interface including a game scene picture and at least partially including a manipulable virtual object, the image rendering method comprising:
when the virtual object is detected to move into the range of a terrain unit, acquiring shadow brightness information of illumination sampling points corresponding to the terrain unit, wherein the terrain unit is a component unit of the terrain in the game scene picture, and the shadow brightness information comprises a gray value of each illumination sampling point and a characteristic value indicating whether the illumination sampling point is located in a shadow;
sampling the shadow brightness information, and rendering the virtual object;
when it is detected that the virtual object moves into the range of the terrain unit, before acquiring shadow shading information of the illumination sampling point corresponding to the terrain unit, the method further includes:
rendering the terrain unit on the graphical user interface when the terrain unit is within the user's field of view.
2. The image rendering method according to claim 1, wherein the sampling the shadow brightness information to perform rendering processing on the virtual object includes:
sampling the shadow brightness information;
and calculating the color value of each pixel point corresponding to the virtual object according to the sampled data.
3. The image rendering method according to claim 1, wherein before obtaining the shadow brightness information of the illumination sampling points corresponding to the terrain unit, the method further comprises:
adopting ray collision detection to obtain illumination sampling points corresponding to the terrain units;
and acquiring shadow brightness information of the illumination sampling points.
4. The image rendering method according to claim 3, wherein the obtaining of the illumination sampling points corresponding to the terrain unit by using ray collision detection comprises:
performing ray collision detection from the highest point downwards corresponding to the preset position of the terrain unit;
an illumination sampling point is obtained each time a ray hits an obstacle.
5. The image rendering method according to claim 4, wherein the number of the preset positions is 1 or more.
6. The image rendering method according to claim 3, wherein the obtaining of the shadow brightness information of the illumination sampling points comprises:
acquiring UV coordinates of the obstacle;
and acquiring shadow brightness information of the illumination sampling point according to the UV coordinate.
7. The image rendering method according to claim 3, wherein after the obtaining of the shadow brightness information of the illumination sampling points, the method further comprises:
and filtering the shadow brightness information of the illumination sampling points corresponding to the adjacent terrain units.
8. The image rendering method according to claim 3, wherein after acquiring the shadow brightness information of the illumination sampling points, the method further comprises:
and performing interpolation processing on the illumination sampling points corresponding to the adjacent terrain units, wherein the gray value of the illumination sampling point obtained through interpolation is the average value of the gray values of the adjacent illumination sampling points, and the characteristic value of the illumination sampling point obtained through interpolation is a preset numerical value.
9. The image rendering method according to claim 3, wherein after acquiring the shadow brightness information of the illumination sampling points, the method further comprises:
and storing the shadow brightness information of the illumination sampling points corresponding to the terrain units.
10. An image rendering apparatus that renders a graphical user interface by executing a software application, the content presented by the graphical user interface including a game scene picture and at least partially containing a manipulable virtual object, the image rendering apparatus comprising:
an obtaining module, used for acquiring shadow brightness information of illumination sampling points corresponding to a terrain unit when the virtual object is detected to move into the range of the terrain unit, wherein the terrain unit is a component unit of the terrain in the game scene picture, and the shadow brightness information comprises gray values of the illumination sampling points and characteristic values indicating whether the illumination sampling points are located in the shadow or not;
a processing module, used for sampling the shadow brightness information and rendering the virtual object;
the processing module is further configured to: before the obtaining module obtains the shadow brightness information of the illumination sampling points corresponding to the terrain unit, and when the terrain unit is within the user's field of view, render the terrain unit on the graphical user interface.
11. A terminal, comprising:
a memory for storing executable instructions of the processor;
a processor; and
a display device;
wherein the processor is configured to perform the image rendering method of any of claims 1 to 9 via execution of the executable instructions and render a graphical user interface on the display device.
12. A computer-readable storage medium on which a computer program is stored, the computer program, when being executed by a processor, implementing the image rendering method of any one of claims 1 to 9.
CN201810589016.9A 2018-06-08 2018-06-08 Image rendering method and device and storage medium Active CN108830923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810589016.9A CN108830923B (en) 2018-06-08 2018-06-08 Image rendering method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810589016.9A CN108830923B (en) 2018-06-08 2018-06-08 Image rendering method and device and storage medium

Publications (2)

Publication Number Publication Date
CN108830923A CN108830923A (en) 2018-11-16
CN108830923B true CN108830923B (en) 2022-06-17

Family

ID=64143572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810589016.9A Active CN108830923B (en) 2018-06-08 2018-06-08 Image rendering method and device and storage medium

Country Status (1)

Country Link
CN (1) CN108830923B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109901905B (en) * 2019-02-28 2023-03-10 网易(杭州)网络有限公司 Picture color modulation method, device, equipment and computer readable storage medium
CN109993823B (en) 2019-04-11 2022-11-25 腾讯科技(深圳)有限公司 Shadow rendering method, device, terminal and storage medium
CN111744183B (en) * 2020-07-02 2024-02-09 网易(杭州)网络有限公司 Illumination sampling method and device in game and computer equipment
CN112023400B (en) * 2020-07-24 2024-07-26 上海米哈游天命科技有限公司 Altitude map generation method, device, equipment and storage medium
CN113421199B (en) * 2021-06-23 2024-03-12 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN116503520A (en) * 2022-10-20 2023-07-28 腾讯科技(深圳)有限公司 Illumination control method, device, computer equipment and storage medium


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8542231B2 (en) * 2009-06-29 2013-09-24 Crytek Gmbh Method, computer graphics image rendering system and computer-readable data storage medium for computing of indirect illumination in a computer graphics image of a scene
WO2011155102A1 (en) * 2010-06-11 2011-12-15 株式会社ソニー・コンピュータエンタテインメント Game device, game control method, and game control program
US10290148B2 (en) * 2011-04-14 2019-05-14 Suntracker Technologies Ltd. System and method for real time dynamic lighting simulation
US9245376B2 (en) * 2013-05-14 2016-01-26 Roblox Corporation Lighting management in virtual worlds
US9854176B2 (en) * 2014-01-24 2017-12-26 Lucasfilm Entertainment Company Ltd. Dynamic lighting capture and reconstruction

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009140135A (en) * 2007-12-05 2009-06-25 Copcom Co Ltd Game program, recording medium with the game program recorded thereon, and computer
CN102254340A (en) * 2011-07-29 2011-11-23 北京麒麟网信息科技有限公司 Method and system for drawing ambient occlusion images based on GPU (graphic processing unit) acceleration
CN103761763A (en) * 2013-12-18 2014-04-30 微软公司 Method for constructing reinforced reality environment by utilizing pre-calculation
CN103995700A (en) * 2014-05-14 2014-08-20 无锡梵天信息技术股份有限公司 Method for achieving global illumination of 3D game engine
CN105321200A (en) * 2015-07-10 2016-02-10 苏州蜗牛数字科技股份有限公司 Offline rendering preprocessing method
CN105825544A (en) * 2015-11-25 2016-08-03 维沃移动通信有限公司 Image processing method and mobile terminal
CN106485776A (en) * 2016-11-23 2017-03-08 北京三体高创科技有限公司 A kind of method and system of 3D game real-time rendering large scale scene
CN106780709A (en) * 2016-12-02 2017-05-31 腾讯科技(深圳)有限公司 A kind of method and device for determining global illumination information
CN106898040A (en) * 2017-03-06 2017-06-27 网易(杭州)网络有限公司 Virtual resource object rendering intent and device
CN107358643A (en) * 2017-07-04 2017-11-17 网易(杭州)网络有限公司 Image processing method, device, electronic equipment and storage medium
CN108090945A (en) * 2017-11-03 2018-05-29 腾讯科技(深圳)有限公司 Object rendering intent and device, storage medium and electronic device
CN107909638A (en) * 2017-11-15 2018-04-13 网易(杭州)网络有限公司 Rendering intent, medium, system and the electronic equipment of dummy object

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
High-quality virtual-real fusion technology based on automatic alignment of environment maps; Xia Lin et al.; Journal of Computer-Aided Design & Computer Graphics; 2011-10-15; vol. 23, no. 10; pp. 1641-1648 *
Ray-terrain collision detection; MinSen; https://blog.csdn.net/minsenwu/article/details/8104258; 2012-10-23; pp. 1-5 *

Also Published As

Publication number Publication date
CN108830923A (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN108830923B (en) Image rendering method and device and storage medium
CN110990516B (en) Map data processing method, device and server
US20190073747A1 (en) Scaling render targets to a higher rendering resolution to display higher quality video frames
CN110189246B (en) Image stylization generation method and device and electronic equipment
CN108848367B (en) Image processing method and device and mobile terminal
CN106898040B (en) Virtual resource object rendering method and device
CN111882632B (en) Surface detail rendering method, device, equipment and storage medium
CN111311756A (en) Augmented reality AR display method and related device
US20240087219A1 (en) Method and apparatus for generating lighting image, device, and medium
CN112652046B (en) Game picture generation method, device, equipment and storage medium
CN114565708A (en) Method, device and equipment for selecting anti-aliasing algorithm and readable storage medium
CN108230434B (en) Image texture processing method and device, storage medium and electronic device
CN117218273A (en) Image rendering method and device
CN109697748A (en) Model compression processing method, model pinup picture processing method device, storage medium
CN113052923A (en) Tone mapping method, tone mapping apparatus, electronic device, and storage medium
CN115512025A (en) Method and device for detecting model rendering performance, electronic device and storage medium
CN112973121B (en) Reflection effect generation method and device, storage medium and computer equipment
CN107945202B (en) Image segmentation method and device based on adaptive threshold value and computing equipment
CN112231020B (en) Model switching method and device, electronic equipment and storage medium
CN118154661A (en) Method, device, equipment and storage medium for opening degree analysis of three-dimensional space
CN108734712B (en) Background segmentation method and device and computer storage medium
CN111242838B (en) Blurred image rendering method and device, storage medium and electronic device
CN117237514A (en) Image processing method and image processing apparatus
CN110197459B (en) Image stylization generation method and device and electronic equipment
CN109658360B (en) Image processing method and device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant