CN114904272A - Game rendering method and device, electronic equipment and medium - Google Patents


Info

Publication number
CN114904272A
CN114904272A (application CN202210599547.2A)
Authority
CN
China
Prior art keywords
distance, determining, game, pixel point, rendered
Prior art date
Legal status
Pending
Application number
CN202210599547.2A
Other languages
Chinese (zh)
Inventor
张惠康
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210599547.2A
Publication of CN114904272A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/53: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers, details of basic data processing
    • A63F2300/538: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers, details of basic data processing for performing operations on behalf of the game client, e.g. rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/28: Indexing scheme for image data processing or generation, in general involving image processing hardware

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The embodiment of the invention provides a game rendering method, a game rendering device, an electronic device, and a medium. The method includes the following steps: determining the target pixel point of a game character in an image frame of the game picture, and determining distance information between a pixel point to be rendered and the target pixel point; determining a depth image corresponding to the image frame, and determining, in the depth image, depth information of the blocking object corresponding to the pixel point to be rendered; and, if the distance value corresponding to the distance information is greater than the distance value corresponding to the depth information, performing shadow rendering on the pixel point to be rendered. Because this depth-image-based method decides shadow rendering by comparing each pixel point individually, the display precision of the final shadow area reaches pixel level, and the shadow display effect is greatly improved.

Description

Game rendering method and device, electronic equipment and medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a game rendering method, a game rendering apparatus, an electronic device, and a computer-readable storage medium.
Background
A two-dimensional game scene is composed of multiple overlaid flat images. Adding dynamic shadows to such a scene improves its realism and provides more interesting mechanics for gameplay.
The conventional method for generating dynamic shadows in a two-dimensional scene is based mainly on the ray detection of a physics engine: collision calculations against blocking objects are completed on the central processing unit (CPU), a mesh covering the shadow area is generated in real time from the collision points, and the engine then renders the mesh.
This scheme has an obvious defect: when there are more blocking objects, their shapes are more complex, or the precision requirement on the shadow mesh is higher, the computational pressure on the CPU grows geometrically, which reduces the running efficiency of the game and ultimately degrades the overall experience.
Disclosure of Invention
In view of the above, embodiments of the present invention are proposed in order to provide a game rendering method and a corresponding game rendering apparatus, an electronic device, and a computer-readable storage medium that overcome or at least partially solve the above problems.
The embodiment of the invention discloses a game rendering method, which comprises the following steps:
determining target pixel points of game characters in image frames of game images, and determining distance information between pixel points to be rendered and the target pixel points;
determining a depth image corresponding to the image frame, and determining depth information of a blocking object corresponding to the pixel point to be rendered in the image frame in the depth image;
and if the distance value corresponding to the distance information is larger than the distance value corresponding to the depth information, performing shadow rendering on the pixel point to be rendered.
Optionally, the determining a depth image corresponding to the image frame comprises:
acquiring contour information of the blocking object edited in advance; the contour information comprises contour line segments connected by contour points in sequence;
determining the visual field range of the game character;
determining the distance between a point on the contour line segment falling into the view range and a coordinate origin in a coordinate system taking the target pixel point as the coordinate origin, and determining the included angle between a line segment formed by connecting the point on the contour line segment falling into the view range and the coordinate origin as end points and the X axis of the coordinate system;
and establishing a mapping relation between the distance and the included angle, and drawing the depth image based on the mapping relation.
Optionally, the determining the visual field range of the game character includes:
and taking a circle centered on the target pixel point, with a preset distance as its radius, as the visual field range of the game character.
Optionally, the establishing a mapping relationship between the distance and the included angle, and drawing the depth image based on the mapping relationship includes:
and mapping the included angle to the abscissa of the depth image, and mapping the distance to the pixel value of the pixel point corresponding to the abscissa.
Optionally, the determining depth information of a blocking object in the depth image, which corresponds to the pixel point to be rendered in the image frame, includes:
determining a target included angle between a line segment formed by connecting the pixel point to be rendered and the origin of coordinates as end points and an X axis of a coordinate system in the coordinate system taking the target pixel point as the origin of coordinates;
and determining a target distance corresponding to the target included angle from the depth image based on the mapping relation, and taking the target distance as the depth information of the blocking object in the depth image.
Optionally, the mapping the distance to the pixel value of the pixel point corresponding to the abscissa includes:
determining a currently stored distance value in pixel values of pixel points corresponding to the abscissa;
judging whether the distance value corresponding to the distance is smaller than the currently stored distance value or not;
if the distance value corresponding to the distance is smaller than the currently stored distance value, mapping the distance value corresponding to the distance to be the pixel value of the pixel point corresponding to the abscissa; otherwise, keeping the pixel value of the pixel point corresponding to the abscissa unchanged.
Optionally, the method is applied to a graphics processor, and the depth image has a size of 1024 × 1.
The embodiment of the invention also discloses a game rendering device, which comprises:
the first determining module is used for determining target pixel points of game characters in image frames of game images and determining distance information between pixel points to be rendered and the target pixel points;
the second determining module is used for determining a depth image corresponding to the image frame and determining depth information of a blocking object corresponding to the pixel point to be rendered in the image frame in the depth image;
and the rendering module is used for performing shadow rendering on the pixel point to be rendered if the distance value corresponding to the distance information is greater than the distance value corresponding to the depth information.
Optionally, the second determining module includes:
the acquisition sub-module is used for acquiring the contour information of the blocking object edited in advance; the contour information includes contour line segments connected in sequence by contour points;
the first determining submodule is used for determining the visual field range of the game character;
the second determining submodule is used for determining the distance between a point on the contour line segment falling into the visual field range and a coordinate origin in a coordinate system taking the target pixel point as the coordinate origin, and determining the included angle between a line segment formed by connecting the point on the contour line segment falling into the visual field range and the coordinate origin as end points and the X axis of the coordinate system;
and the drawing submodule is used for establishing a mapping relation between the distance and the included angle and drawing the depth image based on the mapping relation.
Optionally, the first determining sub-module includes:
and the determining unit is used for taking a circle centered on the target pixel point, with a preset distance as its radius, as the visual field range of the game character.
Optionally, the drawing submodule includes:
and the mapping unit is used for mapping the included angle to the abscissa of the depth image and mapping the distance to the pixel value of the pixel point corresponding to the abscissa.
Optionally, the second determining module includes:
the third determining submodule is used for determining a target included angle between a line segment formed by connecting the pixel point to be rendered and the origin of coordinates as end points and an X axis of a coordinate system in the coordinate system taking the target pixel point as the origin of coordinates;
and the fourth determining submodule is used for determining a target distance corresponding to the target included angle from the depth image based on the mapping relation, and taking the target distance as the depth information of the blocking object in the depth image.
Optionally, the mapping unit includes:
the determining subunit is used for determining a currently stored distance value in the pixel values of the pixel points corresponding to the abscissa;
the judging subunit is used for judging whether the distance value corresponding to the distance is smaller than the currently stored distance value;
a mapping subunit, configured to map, if the distance value corresponding to the distance is smaller than the currently stored distance value, the distance value corresponding to the distance to a pixel value of a pixel point corresponding to the abscissa; otherwise, keeping the pixel value of the pixel point corresponding to the abscissa unchanged.
Optionally, the apparatus is applied to a graphics processor, and the depth image has a size of 1024 × 1.
The embodiment of the invention also discloses an electronic device, which comprises: a processor, a memory and a computer program stored on the memory and capable of running on the processor, the computer program when executed by the processor implementing the steps of a game rendering method as described above.
The embodiment of the invention also discloses a computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the steps of the game rendering method are realized.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, the distance information between a pixel point to be rendered and the target pixel point is compared with the depth information, in the depth image, of the blocking object corresponding to that pixel point, so as to determine whether the pixel point needs shadow rendering. This yields a rendering method that uses a depth image to determine which pixels fall inside the shadow rendering area and which do not. The method can be executed in a graphics processor: shadows are calculated in real time on the GPU at run time, and the GPU's powerful parallel computing capability takes over a large computational burden from the CPU, improving the overall running efficiency of the game. Because every pixel point is compared individually to decide whether shadow rendering is performed, the display precision of the final shadow area reaches pixel level, and the shadow display effect is greatly improved.
Drawings
FIG. 1 is a schematic diagram of shadow effects in a game;
FIG. 2 is a schematic diagram of shadow effects in another game;
FIG. 3 is a flowchart illustrating steps of a method for rendering a game according to an embodiment of the present invention;
FIG. 4 is a flow chart of steps of another game rendering method provided by an embodiment of the invention;
FIG. 5 is a schematic illustration of a depth image;
FIG. 6 is a schematic illustration of a depth image in accordance with an embodiment of the present invention;
FIG. 7 is a model diagram of the positional relationship of a game character to a blocking object;
FIG. 8 is a depth image mapped according to the position relationship of FIG. 7;
FIG. 9 is a flow chart of a method of game rendering according to an embodiment of the present invention;
fig. 10 is a block diagram of a game rendering apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features, and advantages of the present invention comprehensible, embodiments are described in detail below with reference to the accompanying figures. The described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments that a person of ordinary skill in the art can derive from the embodiments given herein fall within the scope of the present invention.
A two-dimensional scene is composed of multiple overlaid flat images. Adding dynamic shadows to such a scene improves its realism and provides more interesting mechanics for gameplay.
Referring to fig. 1, a shadow effect in a game is shown. From the view of player No. 1 ("player 1" in the figure), players No. 3 and No. 5 ("player 3" and "player 5" in the figure), blocked by walls and scene objects, are only partially visible; the rest of their bodies fall within the shadow area and are not displayed. Because the door between player No. 1 and player No. 2 ("player 2" in the figure) is open, player No. 2 lies inside player No. 1's visible area.
Referring to fig. 2, a schematic diagram of a shadow effect in another game is shown. After the door between player No. 1 and player No. 2 is closed, player No. 2 moves from the visible area of fig. 1 into a shadow area blocked by the door, and is no longer shown in the game.
Conventional dynamic-shadow generation for two-dimensional scenes is based mainly on the ray detection of a physics engine: collision calculations against blocking objects are completed on the central processing unit (CPU), a mesh covering the shadow area is generated in real time from the collision points, and the engine finally renders the mesh.
This scheme has an obvious defect: when there are more blocking objects, their shapes are more complex, or the precision requirement on the shadow mesh is higher, the computational pressure on the CPU grows geometrically, which reduces the running efficiency of the game and ultimately degrades the overall experience.
One of the core concepts of the embodiments of the present invention is to compare the distance information between a pixel point to be rendered and the target pixel point with the depth information, in a depth image, of the blocking object corresponding to that pixel point, so as to determine whether the pixel point needs shadow rendering. This yields a rendering method that uses a depth image to decide which pixels fall inside the shadow rendering area and which do not. The method can be executed in a graphics processor: shadows are calculated in real time on the GPU at run time, and the GPU's strong parallel computing capability takes over a large computational burden from the CPU, improving the overall running efficiency of the game. Because every pixel point is compared individually, the display accuracy of the final shadow area reaches pixel level, and the shadow display effect is greatly improved.
Referring to fig. 3, a flowchart illustrating steps of a game rendering method according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 301, determining target pixel points of a game role in an image frame of a game picture, and determining distance information between pixel points to be rendered and the target pixel points.
A Graphical User Interface (GUI) is a computer operation interface displayed in a graphical manner. Most games interact with the player through a graphical user interface.
In the embodiment of the invention, after the game is started, a game picture can be displayed in the graphical user interface. A character model of a player-controlled game character can be displayed on the game screen.
The target pixel point at which the game character is located in the image frame of the game picture can be determined, together with the distance information between each pixel point to be rendered in the frame and that target pixel point. A pixel point to be rendered is a pixel point that has not yet undergone shadow rendering.
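As a rough illustration, the per-pixel distance computation of step 301 can be sketched in Python. The function name and the tuple coordinate representation are illustrative choices, not from the patent, which targets GPU shader code:

```python
import math

def distance_to_character(pixel, character_pixel):
    """Euclidean distance between a pixel point to be rendered and the
    target pixel point where the game character is located; both arguments
    are (x, y) coordinates in the image frame."""
    dx = pixel[0] - character_pixel[0]
    dy = pixel[1] - character_pixel[1]
    return math.hypot(dx, dy)
```

In a real implementation this would be evaluated per fragment on the GPU rather than per call on the CPU.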
Step 302, determining a depth image corresponding to the image frame, and determining depth information of a blocking object corresponding to the pixel point to be rendered in the image frame in the depth image.
A depth image corresponding to the current image frame is determined. The depth image may be a depth texture map: an image or image channel, used in 3D computer graphics and computer vision, that contains information about the distance from the surfaces of scene objects to a viewpoint, and is used to simulate or reconstruct 3D shape.
In the embodiment of the present invention, a depth image may be configured in advance for an image frame that needs to be shadow-rendered, and the depth image may be determined according to a positional relationship between a game character and a blocking object in the image frame. After configuring the depth image for the image frame, a mapping relationship may be established between the image frame and the depth image. In one example, the image frames correspond to depth images one to one, and for one image frame which needs to be shadow rendered currently, a depth image corresponding to the image frame can be determined.
After the depth image corresponding to the current image frame is determined, the blocking object corresponding to the pixel point to be rendered may be determined. For example, taking the target pixel point as the viewpoint and looking in the direction of the pixel point to be rendered, the blocking object along that line of sight is identified, and its depth information in the depth image can then be determined.
Step 303, if the distance value corresponding to the distance information is greater than the distance value corresponding to the depth information, performing shadow rendering on the pixel point to be rendered.
In the embodiment of the invention, the distance information between the pixel point to be rendered and the target pixel point is compared with the depth information of the blocking object corresponding to the pixel point to be rendered in the depth image, so that whether the pixel point to be rendered needs to be subjected to shadow rendering or not can be determined.
If the distance value corresponding to the distance information is greater than the distance value corresponding to the depth information, shadow rendering is performed on the pixel point to be rendered; if it is not greater, the pixel point to be rendered is not shadow-rendered.
After all the pixel points in the image frame that are determined to require shadow rendering have been rendered, the image frame can be displayed, thereby presenting the rendered shadow effect.
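The decision rule of step 303 amounts to a single depth comparison per pixel. A minimal Python sketch with illustrative names follows; the actual test would run in a fragment shader:

```python
def shade_pixel(distance_to_char, occluder_depth):
    """Per-pixel shadow test from step 303: a pixel lies in shadow when it
    is farther from the game character than the blocking object recorded in
    the depth image along the same line of sight."""
    return distance_to_char > occluder_depth
```

A pixel exactly at the occluder's distance is treated as lit here, matching the "greater than" condition in the text.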
In summary, in the embodiments of the present invention, the distance information between a pixel point to be rendered and the target pixel point is compared with the depth information, in the depth image, of the blocking object corresponding to that pixel point, so as to determine whether the pixel point needs shadow rendering. This yields a rendering method that uses a depth image to determine which pixels fall inside the shadow rendering area and which do not. The method can be executed in a graphics processor: shadows are calculated in real time on the GPU at run time, and the GPU's strong parallel computing capability takes over a large computational burden from the CPU, improving the overall running efficiency of the game. Because every pixel point is compared individually to decide whether shadow rendering is performed, the display precision of the final shadow area reaches pixel level, and the shadow display effect is greatly improved.
Referring to fig. 4, a flowchart illustrating steps of another game rendering method provided in an embodiment of the present invention is shown, which may specifically include the following steps:
step 401, determining target pixel points of a game role in an image frame of a game picture, and determining distance information between pixel points to be rendered and the target pixel points.
In the embodiment of the invention, after the game is started, a game picture can be displayed in the graphical user interface. A character model of a player-controlled game character can be displayed on the game screen.
The target pixel point at which the game character is located in the image frame of the game picture can be determined, together with the distance information between each pixel point to be rendered in the frame and that target pixel point. A pixel point to be rendered is a pixel point that has not yet undergone shadow rendering.
In an alternative embodiment, the game rendering method of the embodiments of the present invention may be executed in a graphics processor. Shadow calculation is performed in real time on the GPU; by exploiting the GPU's powerful parallel computing capability, a large computational burden is transferred from the CPU to the GPU, which can improve the overall running efficiency of the game.
Step 402, determining a depth image corresponding to the image frame.
A depth image corresponding to the current image frame is determined, wherein the depth image may be a depth texture map.
With respect to step 402, the following steps may be performed:
and a sub-step S11 of obtaining the contour information of the previously edited blocking object.
Wherein the contour information includes contour line segments connected in order by contour points.
In the embodiment of the present invention, the contour information of a blocking object may be edited in an engine editor, such as the Unity engine or the Cocos Creator engine, during the game-screen production stage. The contour of the blocking object can be edited with the polygon editing function provided by the game engine; it is formed by connecting N points in a fixed order in the two-dimensional plane. When a game object is blocked by a blocking object, the corresponding shadow effect needs to be rendered, which improves the realism of the game.
In an initial stage of game execution, a depth image may be created to record depth values, each stored as an (r, g, b, a) tuple. A depth image of size M × N can be sampled with two-dimensional plane coordinates (m, n) to read the depth value (r, g, b, a) stored in a pixel.
Referring to fig. 5, a depth image is illustrated schematically. The larger the depth image, the more memory it occupies, the higher its precision, and the better the display quality of the final shadow. For example, a 1920 × 1080 depth image stores 1920 × 1080 pixels of data, and each pixel of the rendering result displayed on screen is sampled from that data according to a mapping relationship. If the screen also displays 1920 × 1080 pixels, the mapping is one-to-one (one display pixel corresponds to one pixel of data in the depth image); if the screen displays 3840 × 2160 pixels, several display pixels map to the same pixel of the depth image. Thus the larger the depth image, the more pixel data it can store, the higher the sampling precision, and the higher the final display quality; but a larger depth image also consumes more hardware resources (memory/video memory and bandwidth).
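The sampling relationship described above, where several display pixels share one depth-image pixel when the screen is wider, can be illustrated with a simple nearest-neighbor index mapping. The helper function is hypothetical, for illustration only:

```python
def sample_column(display_x, display_width, depth_width):
    """Map a display pixel column to the depth-image column it samples.
    When display_width > depth_width, several display columns share one
    depth-image column; when the widths are equal, the mapping is one-to-one."""
    return min(display_x * depth_width // display_width, depth_width - 1)
```

For a 3840-wide screen and a 1920-wide depth image, display columns 0 and 1 both read depth column 0, matching the many-to-one case in the text.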
In an alternative embodiment of the invention, the depth image may be set to 1024 × 1 in size. Referring to fig. 6, a schematic diagram of a depth image according to an embodiment of the present invention is shown, where the depth image stores 1024 × 1 pixel data.
In the frame cycle phase of game execution, the depth image may be reset at the beginning of each frame cycle; that is, the depth values recorded in the depth image are set to default values, e.g., (1, 0, 0, 0).
When the game is running, there is a timed loop mechanism whose rate is called the frame rate, that is, the number of iterations per second. A frame rate of 30 means that the game's main logic runs 30 times per second. At the start of each frame cycle, which marks the start of the main logic, a method that clears the depth image is executed; it restores all data recorded in the depth image to default values. Since the shadow in the game rendering method of the embodiment must be calculated in real time, that is, the current shadow range is recomputed every frame, the previous frame's result stored in the depth image must be reset at the start of each frame before the complete shadow range is calculated again.
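The per-frame reset can be sketched as follows. Representing the depth image as a Python list of RGBA tuples is an illustrative simplification of a GPU texture; the default value (1, 0, 0, 0) follows the text:

```python
DEFAULT_DEPTH = (1.0, 0.0, 0.0, 0.0)  # default depth value given in the text

def reset_depth_image(width=1024):
    """Re-create the 1024 x 1 depth image at the start of a frame cycle,
    filling every texel with the default depth value."""
    return [DEFAULT_DEPTH] * width
```

On actual hardware this would be a texture clear rather than a list allocation.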
After the depth values recorded in the depth image are reset, the contour points of the blocking objects produced in the game-screen production stage may be uploaded to the GPU in the form of line segments. The contour information of the previously edited blocking objects can thereby be obtained in the GPU.
Sub-step S12: determine the visual field range of the game character.
In the embodiment of the invention, different visual field ranges can be set for the game character. The field of view may be a circle, or another shape such as a rectangle, square, or diamond; in general, any figure with a center point. Customizing the character's field of view can enrich gameplay.
For sub-step S12, the following steps may be performed:
and taking the target pixel point as a circle center and a circle with a preset distance as a radius as the visual field range of the game role.
In an embodiment of the present invention, a circular area formed by taking a target pixel point where a game character is located as a center of a circle and taking a preset distance as a radius may be used as a visual field range of the game character.
In addition, a region of another shape (rectangle, square, diamond, or the like) centered on the target pixel point where the game character is located may be used as the character's visual field region; the present application does not particularly limit the shape of this region.
And a substep S13 of determining, in a coordinate system with the target pixel point as a coordinate origin, a distance between a point on the contour line segment falling within the visual field range and the coordinate origin, and an angle between a line segment connecting the point on the contour line segment falling within the visual field range and the coordinate origin as end points and an X axis of the coordinate system.
In the embodiment of the invention, a coordinate system of the two-dimensional plane can be established with the target pixel point as the coordinate origin. The points on the contour line segments that fall within the visual field range can be determined, along with the distance between each such point and the coordinate origin. Each such point can then be connected to the coordinate origin to form a line segment, and the included angle between each of these line segments and the X axis of the coordinate system is determined.
The above process may be understood as determining the line of sight direction of the game character and determining the distance between the blocking object and the game character in each line of sight direction.
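The distance and included angle described above amount to converting each contour point into polar coordinates around the character. A hedged sketch (the function name and the normalization of the angle into [0, 2π) are assumptions):

```python
import math

def polar_of(point, origin=(0.0, 0.0)):
    """Return (distance, angle): the distance from the coordinate origin to
    `point`, and the included angle in [0, 2*pi) between the segment
    origin->point and the X axis of the character-centred coordinate system."""
    dx, dy = point[0] - origin[0], point[1] - origin[1]
    distance = math.hypot(dx, dy)
    angle = math.atan2(dy, dx) % (2.0 * math.pi)
    return distance, angle
```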
And a substep S14 of establishing a mapping relation between the distance and the included angle, and drawing the depth image based on the mapping relation.
For each line segment, a mapping relation between the length (namely the distance between two end points) of the line segment and the included angle between the line segment and the X axis of the coordinate system is established, and the depth image is drawn based on the mapping relation.
For sub-step S14, the following steps may be performed:
and mapping the included angle to the abscissa of the depth image, and mapping the distance to the pixel value of the pixel point corresponding to the abscissa.
In the embodiment of the present invention, the included angle may be mapped to an abscissa of the depth image, and the distance may be mapped to a pixel value/depth value of a pixel point corresponding to the abscissa.
Fig. 7 is a model diagram showing a positional relationship between a game character and a blocking object. The position of the game character is used as the origin of coordinates O, the contour line segment of the blocking object is assumed to be AB, the field of view of the game character is assumed to be circular, and the radius of the field of view is assumed to be OR. As can be seen from the figure, the length of the line segment formed by the origin of coordinates O and any point on AB is less than OR, i.e., the points on AB can be determined to fall within the visual field of the game character, and any point on AB can be represented by D. The distance between point D and the origin of coordinates O can be calculated, and the angle between OD and the X-axis can be calculated. The included angle theta between the OD and the X axis and the distance L from the point D to the coordinate origin O can be correspondingly mapped into the depth image.
In one embodiment of the invention, the depth image is 1024 x 1 in size, and the abscissa range [0,1024] of the depth image may be mapped to the angle range [0,2 π]. The abscissa corresponding to the calculated included angle θ between a line segment and the X axis of the coordinate system can then be found in the depth image, and the calculated distance L between the line segment's two endpoints can be written into the pixel value of the pixel point at that abscissa; for example, the distance L may be written into the r value of the pixel point, giving (L,0,0,1).
Fig. 8 shows a depth image obtained by mapping according to the positional relationship in fig. 7. And each pixel point in the depth image records the depth value of the corresponding angle, and the depth values corresponding to the angles [0,2 pi ] are recorded together. For the segment AB, assuming that an included angle between the X axis and OA is α, a distance between OA endpoints corresponding to the included angle α is a, an included angle between the X axis and OB is β, and a distance between OB endpoints corresponding to the included angle β is b, an angle between the included angle α and the included angle β may be mapped to an abscissa of the depth image, and a distance corresponding to the angle is mapped to a pixel value/depth value on a pixel point corresponding to the abscissa.
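As a simplified CPU-side analogue of this texture write (the constants, function names, and texel layout are assumptions carried over from the description above), the angle-to-abscissa mapping and the (L,0,0,1) write can be sketched as:

```python
import math

DEPTH_WIDTH = 1024                      # abscissa range maps to the angle range [0, 2*pi]

def angle_to_abscissa(theta):
    """Map an included angle in [0, 2*pi) to a column of the depth image."""
    return int(theta / (2.0 * math.pi) * DEPTH_WIDTH) % DEPTH_WIDTH

def write_depth(depth_image, theta, distance):
    """Store the distance L in the r component of the texel, i.e. (L,0,0,1)."""
    depth_image[angle_to_abscissa(theta)] = (distance, 0.0, 0.0, 1.0)

depth_image = [(1.0, 0.0, 0.0, 0.0)] * DEPTH_WIDTH   # default-initialized
write_depth(depth_image, math.pi, 0.75)              # blocker at angle pi, distance 0.75
```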
In an optional embodiment of the present invention, for the step of mapping the distance to the pixel value of the pixel point corresponding to the abscissa, the following steps may be performed:
determining a currently stored distance value in pixel values of pixel points corresponding to the abscissa; judging whether the distance value corresponding to the distance is smaller than the currently stored distance value or not; if the distance value corresponding to the distance is smaller than the currently stored distance value, mapping the distance value corresponding to the distance to be the pixel value of the pixel point corresponding to the abscissa; otherwise, keeping the pixel value of the pixel point corresponding to the abscissa unchanged.
In the process of generating the depth image corresponding to the image frame, within the visual field range of the game character, the intersections between the game character's sight lines and the blocking objects can be calculated. In each sight direction, the intersection point closest to the game character is taken as the target intersection point, and the distance between the target intersection point and the game character is recorded in the depth image, ensuring that the distance recorded in the depth image is the shortest distance in that direction.
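The "keep the nearest intersection" rule can be sketched as a conditional write; this is illustrative only (in the patent the comparison happens in the GPU depth write), and the names and texel layout are assumptions:

```python
def write_min_depth(depth_image, x, distance):
    """Write `distance` at abscissa x only if it is smaller than the
    currently stored distance value, so the depth image always records
    the nearest blocker along each sight direction."""
    stored = depth_image[x][0]          # currently stored distance (r component)
    if distance < stored:
        depth_image[x] = (distance, 0.0, 0.0, 1.0)
    # otherwise keep the pixel value unchanged

depth_image = [(1.0, 0.0, 0.0, 0.0)] * 1024
write_min_depth(depth_image, 7, 0.5)    # nearer blocker: written
write_min_depth(depth_image, 7, 0.8)    # farther blocker: ignored
```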
And 403, determining a target included angle between a line segment formed by connecting the pixel point to be rendered and the origin of coordinates as end points and an X axis of a coordinate system in the coordinate system taking the target pixel point as the origin of coordinates.
A coordinate system of a two-dimensional plane is established by taking the target pixel point as a coordinate origin, a pixel point to be rendered and the coordinate origin can be taken as endpoints to connect a line segment, and a target included angle between the line segment and an X axis of the coordinate system is determined.
The above process may be understood as determining a target sight direction in which the pixel point to be rendered is located.
Step 404, determining a target distance corresponding to the target included angle from the depth image based on the mapping relationship, and using the target distance as the depth information of the blocking object in the depth image.
The abscissa of each pixel point in the depth image corresponds to a preset angle, and a distance value is stored in the pixel value of the pixel point, so that a target angle corresponding to a target included angle can be determined from the preset angles, and the corresponding target pixel point can be found based on the target angle, so that a target distance value corresponding to the target distance stored in the target pixel point can be found.
In the pixel shader that renders the shadow Mesh, for any pixel point N to be rendered in the game picture, the included angle α1 between the line connecting N to the coordinate origin O and the X axis is determined. The depth image generated in real time is sampled at α1 to obtain the depth value H recorded for that angle. H is then compared with the length of ON; if the length of ON is greater than H, the pixel point to be rendered is in a shadow area, and corresponding shadow blending is performed. Repeating this step for all pixel points to be rendered yields the final shadow effect of the image frame.
Step 405, if the distance value corresponding to the distance information is greater than the distance value corresponding to the depth information, performing shadow rendering on the pixel point to be rendered.
If the distance value corresponding to the distance information is greater than the distance value corresponding to the depth information, shadow rendering is performed on the pixel point to be rendered; if the distance value corresponding to the distance information is not greater than the distance value corresponding to the depth information, shadow rendering is not performed on the pixel point to be rendered.
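Putting the comparison together, a CPU-side sketch of the per-pixel shadow test might read as follows; the names, the depth-image layout, and the large "no blocker" sentinel value are illustrative assumptions (the patent's default texel is (1,0,0,0)):

```python
import math

DEPTH_WIDTH = 1024

def is_in_shadow(pixel, origin, depth_image):
    """Sample the depth image at the pixel's sight angle and compare the
    recorded blocker distance with the pixel's own distance to the
    character: lying farther than the blocker means the pixel is in shadow."""
    dx, dy = pixel[0] - origin[0], pixel[1] - origin[1]
    theta = math.atan2(dy, dx) % (2.0 * math.pi)
    x = int(theta / (2.0 * math.pi) * DEPTH_WIDTH) % DEPTH_WIDTH
    blocker_distance = depth_image[x][0]
    return math.hypot(dx, dy) > blocker_distance

depth_image = [(1e9, 0.0, 0.0, 0.0)] * DEPTH_WIDTH   # "no blocker" sentinel
depth_image[0] = (1.0, 0.0, 0.0, 1.0)                # blocker at angle 0, distance 1
```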
After all the pixel points which are determined to be required to be subjected to shadow rendering in the image frame are rendered, the image frame can be displayed, and therefore the rendered shadow effect is displayed.
In order to enable those skilled in the art to better understand steps 401 to 405 of the embodiment of the present invention, the following description is provided by way of an example:
fig. 9 is a flowchart of a game rendering method according to an embodiment of the present invention, where the depth image may be a depth texture map, and the specific flow is as follows:
1. In the editing stage of creating the game screen, the outline of the blocking object or blocking area may be edited.
2. During the initial phase of game play, a depth texture map may be created to record depth values (r, g, b, a).
3. At the beginning of each frame cycle while the game runs, the depth texture map may be reset, i.e., the depth values recorded in it are set to the default value (1,0,0,0).
4. The contour points of the blocking object output by the editing stage are uploaded to the GPU in the form of line segments.
5. In the vertex shader, with the player character's position as the coordinate origin O and the two endpoints of a blocking object's contour line segment denoted A and B, the included angle α between the X axis and OA and the included angle β between the X axis and OB are calculated.
6. In the pixel shader, with the player character's position as the coordinate origin O and the radius R of the player character's visual field range as the length, interpolation from angle α to angle β yields a line segment OR; the intersection D of OR with segment AB is calculated, along with the distance between D and the origin O.
7. The included angle θ between OD and the X axis and the distance L (the depth value) from intersection D to origin O are written into the depth texture map. The abscissa range [0,1024] of the depth texture map is mapped to [0,2 π]; that is, the angle θ corresponds to a unique abscissa of the depth texture map, and the distance L is stored as the r component of the depth value at that abscissa, giving (L,0,0,1).
8. Steps 5 to 7 are repeated to traverse all contour line segments in the image frame, calculating the depth value for each corresponding angle and writing it into the depth texture map, yielding the final real-time depth texture map. Each texel of the depth texture map records the depth value for its corresponding angle, so depth values for the full [0,2 π] range of angles are recorded.
9. In the pixel shader that renders the shadow Mesh, the included angle α1 between the line connecting a pixel point N to be rendered with the coordinate origin O and the X axis is determined; the real-time depth texture map is sampled at α1 to obtain the depth value H recorded for that angle; H is compared with the length of ON, and if the length of ON is greater than H, the pixel point to be rendered is in a shadow area and corresponding shadow blending is performed. This step is repeated for all pixel points to be rendered to obtain the final shadow area effect.
10. Steps 3 to 9 are repeated in every frame cycle while the game runs, yielding the shadow area of each frame, i.e., real-time dynamic shadows.
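Step 6 above hinges on intersecting a sight ray with a contour segment. One way to sketch that calculation, with the character at the coordinate origin, solves the 2x2 linear system O + t·(cos θ, sin θ) = A + s·(B − A) for t ≥ 0 and 0 ≤ s ≤ 1; this formulation is an illustrative assumption, not the patent's shader code:

```python
import math

def ray_segment_distance(theta, a, b):
    """Distance from the origin O along the sight ray at angle `theta` to
    its intersection with segment AB, or None if there is no intersection."""
    dx, dy = math.cos(theta), math.sin(theta)   # ray direction
    ex, ey = b[0] - a[0], b[1] - a[1]           # segment direction A -> B
    det = ex * dy - ey * dx
    if abs(det) < 1e-9:                         # sight ray parallel to the segment
        return None
    t = (ex * a[1] - ey * a[0]) / det           # distance along the ray
    s = (dx * a[1] - dy * a[0]) / det           # position along AB
    if t >= 0.0 and 0.0 <= s <= 1.0:
        return t
    return None
```

The returned t is exactly the depth value L that step 7 would record at the abscissa for angle θ.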
In summary, in the embodiments of the present invention, the distance information between the pixel point to be rendered and the target pixel point is compared with the depth information, in the depth image, of the blocking object corresponding to that pixel point, in order to determine whether the pixel point needs shadow rendering. This provides a rendering method that uses the depth image to determine which pixels fall within the shadow rendering area and which fall outside it. The method can be executed on a graphics processor: shadow calculation is performed in real time on the GPU at runtime, and the GPU's strong parallel computing capability shifts a large computational burden off the CPU, improving the overall running efficiency of the game. Because each pixel point is compared individually to decide whether shadow rendering is performed, the display accuracy of the final shadow area reaches pixel-level precision, greatly improving the shadow display effect.
The method relies on an engine editor to edit light-blocking objects for a two-dimensional scene in advance; while the game runs, a graphics processor computes collision detection between the game character's sight lines and the blocking objects within the character's visual field range to generate a real-time shadow area. Through preprocessing, the light-blocking objects of the two-dimensional scene are stored as point-coordinate data. Real-time GPU-based shadow calculation at runtime transfers a large computational burden from the CPU to the GPU by exploiting the GPU's powerful parallel computing capability, improving the overall running efficiency of the game; the display precision of the final shadow area reaches pixel-level precision, greatly improving the final display effect, at the additional cost of only a 1024 x 1 depth texture map. Any rendering based on a modern graphics API can employ the above method to generate real-time shadows.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 10, a block diagram of a game rendering apparatus according to an embodiment of the present invention is shown, which may specifically include the following modules:
the first determining module 1001 is configured to determine a target pixel point of a game character in an image frame of a game image, and determine distance information between a pixel point to be rendered and the target pixel point;
a second determining module 1002, configured to determine a depth image corresponding to the image frame, and determine depth information of a blocking object in the depth image, where the blocking object corresponds to the pixel point to be rendered in the image frame;
a rendering module 1003, configured to perform shadow rendering on the pixel to be rendered if the distance value corresponding to the distance information is greater than the distance value corresponding to the depth information.
In an embodiment of the present invention, the second determining module includes:
the acquisition submodule is used for acquiring the contour information of the pre-edited blocking object; the contour information includes contour line segments connected in sequence by contour points;
the first determining submodule is used for determining the visual field range of the game role;
the second determining submodule is used for determining the distance between a point on the contour line segment falling into the visual field range and a coordinate origin in a coordinate system taking the target pixel point as the coordinate origin, and determining the included angle between a line segment formed by connecting the point on the contour line segment falling into the visual field range and the coordinate origin as end points and the X axis of the coordinate system;
and the drawing submodule is used for establishing a mapping relation between the distance and the included angle and drawing the depth image based on the mapping relation.
In an embodiment of the present invention, the first determining sub-module includes:
and the determining unit is used for taking a circle which takes the target pixel point as a circle center and takes a preset distance as a radius as the visual field range of the game role.
In an embodiment of the present invention, the rendering sub-module includes:
and the mapping unit is used for mapping the included angle to the abscissa of the depth image and mapping the distance to the pixel value of the pixel point corresponding to the abscissa.
In an embodiment of the present invention, the second determining module includes:
the third determining submodule is used for determining a target included angle between a line segment formed by connecting the pixel point to be rendered and the origin of coordinates as end points and an X axis of a coordinate system in the coordinate system taking the target pixel point as the origin of coordinates;
and the fourth determining submodule is used for determining a target distance corresponding to the target included angle from the depth image based on the mapping relation, and taking the target distance as the depth information of the blocking object in the depth image.
In an embodiment of the present invention, the mapping unit includes:
the determining subunit is used for determining a currently stored distance value in the pixel values of the pixel points corresponding to the abscissa;
the judging subunit is used for judging whether the distance value corresponding to the distance is smaller than the currently stored distance value;
a mapping subunit, configured to map, if the distance value corresponding to the distance is smaller than the currently stored distance value, the distance value corresponding to the distance to a pixel value of a pixel point corresponding to the abscissa; otherwise, keeping the pixel value of the pixel point corresponding to the abscissa unchanged.
In an embodiment of the invention, the apparatus is applied to a graphics processor, and the size of the depth image is 1024 x 1.
In summary, in the embodiments of the present invention, the distance information between the pixel point to be rendered and the target pixel point is compared with the depth information, in the depth image, of the blocking object corresponding to that pixel point, in order to determine whether the pixel point needs shadow rendering. This provides a rendering apparatus that uses the depth image to determine which pixels fall within the shadow rendering area and which fall outside it. The apparatus can run on a graphics processor: shadow calculation is performed in real time on the GPU at runtime, and the GPU's strong parallel computing capability shifts a large computational burden off the CPU, improving the overall running efficiency of the game. Because each pixel point is compared individually to decide whether shadow rendering is performed, the display accuracy of the final shadow area reaches pixel-level precision, greatly improving the shadow display effect.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
An embodiment of the present invention further provides an electronic device, including: the game rendering method comprises a processor, a memory and a computer program which is stored on the memory and can run on the processor, wherein when the computer program is executed by the processor, each process of the game rendering method embodiment is realized, the same technical effect can be achieved, and in order to avoid repetition, the description is omitted here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned game rendering method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "include", "including", or any other variations thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal device that comprises the element.
The game rendering method, the game rendering device, the electronic device and the computer-readable storage medium provided by the present invention are described in detail above, and specific examples are applied herein to explain the principles and embodiments of the present invention, and the descriptions of the above embodiments are only used to help understand the method and the core ideas of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A game rendering method, the method comprising:
determining target pixel points of game roles in an image frame of a game picture, and determining distance information between pixel points to be rendered and the target pixel points;
determining a depth image corresponding to the image frame, and determining depth information of a blocking object corresponding to the pixel point to be rendered in the image frame in the depth image;
and if the distance value corresponding to the distance information is greater than the distance value corresponding to the depth information, performing shadow rendering on the pixel point to be rendered.
2. The method of claim 1, wherein the determining the depth image corresponding to the image frame comprises:
acquiring contour information of the blocking object edited in advance; the contour information includes contour line segments connected in sequence by contour points;
determining the visual field range of the game role;
determining the distance between a point on the contour line segment falling into the view range and a coordinate origin in a coordinate system taking the target pixel point as the coordinate origin, and determining the included angle between a line segment formed by connecting the point on the contour line segment falling into the view range and the coordinate origin as end points and the X axis of the coordinate system;
and establishing a mapping relation between the distance and the included angle, and drawing the depth image based on the mapping relation.
3. The method of claim 2, wherein determining the field of view of the game character comprises:
and taking the target pixel point as a circle center and a circle with a preset distance as a radius as the visual field range of the game role.
4. The method according to claim 2, wherein the establishing a mapping relationship between the distance and the included angle, and the drawing the depth image based on the mapping relationship comprises:
and mapping the included angle to the abscissa of the depth image, and mapping the distance to the pixel value of the pixel point corresponding to the abscissa.
5. The method of claim 2, wherein the determining depth information of a blocking object in the depth image corresponding to the pixel point to be rendered in the image frame comprises:
determining a target included angle between a line segment formed by connecting the pixel point to be rendered and the origin of coordinates as end points and an X axis of a coordinate system in the coordinate system taking the target pixel point as the origin of coordinates;
and determining a target distance corresponding to the target included angle from the depth image based on the mapping relation, and taking the target distance as the depth information of the blocking object in the depth image.
6. The method of claim 4, wherein said mapping said distance to a pixel value of a pixel point corresponding to said abscissa comprises:
determining a currently stored distance value in pixel values of pixel points corresponding to the abscissa;
judging whether the distance value corresponding to the distance is smaller than the currently stored distance value or not;
if the distance value corresponding to the distance is smaller than the currently stored distance value, mapping the distance value corresponding to the distance to be the pixel value of the pixel point corresponding to the abscissa; otherwise, keeping the pixel value of the pixel point corresponding to the abscissa unchanged.
7. The method of claim 1, wherein the method is applied to a graphics processor, and wherein the depth image has a size of 1024 x 1.
8. A game rendering apparatus, the apparatus comprising:
the first determining module is used for determining target pixel points of game characters in image frames of game images and determining distance information between pixel points to be rendered and the target pixel points;
the second determining module is used for determining a depth image corresponding to the image frame and determining depth information of a blocking object corresponding to the pixel point to be rendered in the image frame in the depth image;
and the rendering module is used for performing shadow rendering on the pixel point to be rendered if the distance value corresponding to the distance information is greater than the distance value corresponding to the depth information.
9. An electronic device, comprising: a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of a game rendering method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of a game rendering method as claimed in any one of claims 1 to 7.
CN202210599547.2A 2022-05-30 2022-05-30 Game rendering method and device, electronic equipment and medium Pending CN114904272A (en)

Priority Applications (1)

Application Number: CN202210599547.2A; Priority/Filing Date: 2022-05-30; Title: Game rendering method and device, electronic equipment and medium

Publications (1)

Publication Number: CN114904272A; Publication Date: 2022-08-16

Family ID: 82768446

Country Status (1)

Country: CN; Link: CN114904272A (en)

Similar Documents

Publication Publication Date Title
CN110650368B (en) Video processing method and device and electronic equipment
CN110889890B (en) Image processing method and device, processor, electronic equipment and storage medium
TWI636423B (en) Method for efficient construction of high resolution display buffers
US7142709B2 (en) Generating image data
JP4481166B2 (en) Method and system enabling real-time mixing of composite and video images by a user
KR101145260B1 (en) Apparatus and method for mapping textures to object model
JP4071422B2 (en) Motion blur image drawing method and drawing apparatus
CN102768765A (en) Real-time soft shadow rendering method for point light sources
US11816788B2 (en) Systems and methods for a generating an interactive 3D environment using virtual depth
CN102572391B (en) Method and device for genius-based processing of video frame of camera
US6914612B2 (en) Image drawing method, image drawing apparatus, recording medium, and program
KR100610689B1 (en) Method for inserting moving picture into 3-dimension screen and record medium for the same
US6717575B2 (en) Image drawing method, image drawing apparatus, recording medium, and program
JP2003051025A (en) Method and device for plotting processing, recording medium with recorded plotting processing program, and plotting processing program
CN113546410B (en) Terrain model rendering method, apparatus, electronic device and storage medium
CN111167119B (en) Game development display method, device, equipment and storage medium
JP4513423B2 (en) Object image display control method using virtual three-dimensional coordinate polygon and image display apparatus using the same
CN114904272A (en) Game rendering method and device, electronic equipment and medium
CN115830210A (en) Rendering method and device of virtual object, electronic equipment and storage medium
CN115970275A (en) Projection processing method and device for virtual object, storage medium and electronic equipment
Gois et al. Interactive shading of 2.5 D models.
Oksanen 3D Interior environment optimization for VR
KR101859318B1 (en) Video content production methods using 360 degree virtual camera
CN115953520B (en) Recording and playback method and device for virtual scene, electronic equipment and medium
Chen et al. A quality controllable multi-view object reconstruction method for 3D imaging systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination