CN112891946B - Game scene generation method and device, readable storage medium and electronic equipment


Info

Publication number
CN112891946B
CN112891946B
Authority
CN
China
Prior art keywords
scene
scene element
game
target
rendering
Prior art date
Legal status
Active
Application number
CN202110276020.1A
Other languages
Chinese (zh)
Other versions
CN112891946A (en)
Inventor
郑奇晖
梁成
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202110276020.1A
Publication of CN112891946A
Application granted
Publication of CN112891946B

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)

Abstract

The disclosure relates to a game scene generation method, a game scene generation device, a readable storage medium and electronic equipment, and relates to the technical field of space rendering, wherein the method comprises the following steps: acquiring a first scene element included in a game scene, and performing preset processing on the first scene element to obtain a second scene element taking the first scene element as a light source; acquiring a third scene element and a pixel corresponding to the third scene element included in the game scene, and calculating a gradient value of the pixel; according to the gradient value and the normalized depth value of the second scene element, obtaining the edge of the third scene element, and superposing the edge and the second scene element to obtain the target third scene element; and rendering the first scene element, the second scene element and the target third scene element according to the rendering groups and the rendering levels corresponding to the first scene element, the second scene element and the target third scene element to obtain a target game scene. The present disclosure reduces the computational consumption of rendering.

Description

Game scene generation method and device, readable storage medium and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of space rendering, in particular to a game scene generation method, a game scene generation device, a readable storage medium and electronic equipment.
Background
In current games, in order to convey the fineness and realism of a game scene, the scene elements included in the game scene are rendered through different rendering schemes.
Specifically, when rendering scene elements, a 3D rendering scheme or a 2D rendering scheme can be selected: the 3D rendering scheme can display the scene elements included in the game scene from multiple viewpoints and with multiple camera angles, while the 2D rendering scheme can reduce the computational consumption of rendering.
However, when the 3D rendering scheme is selected, depth buffering is required, so the edges of scene elements cannot contain transparent pixels and jagged edges are easily produced; even if the edges of the scene elements are outlined and transparent pixels are added, the detail is insufficient, and although the jagged edges can be optimized through an anti-aliasing scheme, the performance consumption is high. When the 2D rendering scheme is selected, the orthographic camera it employs may cause the scene elements to lack immersion and realism.
Accordingly, it is desirable to provide a new game scene generation method.
It should be noted that the information disclosed in the above background section is only for enhancing the understanding of the background of the present invention, and thus may include information that does not constitute prior art already known to those of ordinary skill in the art.
Disclosure of Invention
The invention aims to provide a game scene generation method, a game scene generation device, a readable storage medium and an electronic device, so as to overcome, at least to a certain extent, the problems of high rendering computation consumption and lack of realism in game scenes caused by the limitations and defects of the related art.
According to one aspect of the present disclosure, there is provided a game scene generation method including:
acquiring a first scene element included in a game scene, and performing preset processing on the first scene element to obtain a second scene element taking the first scene element as a light source;
acquiring a third scene element included in the game scene and a pixel corresponding to the third scene element, and calculating a gradient value of the pixel;
obtaining edges of the third scene elements according to the gradient values and the normalized depth values of the second scene elements, and superposing the edges and the second scene elements to obtain target third scene elements;
and rendering the first scene element, the second scene element and the target third scene element according to the rendering group and the rendering level corresponding to the first scene element, the second scene element and the target third scene element to obtain a target game scene.
In one exemplary embodiment of the present disclosure, acquiring a first scene element included in a game scene includes:
acquiring depth values of scene elements included in the game scene, and carrying out normalization processing on the depth values of the scene elements to obtain a normalized map;
and determining the map belonging to the first scene element in the normalized map according to a preset threshold value.
In an exemplary embodiment of the present disclosure, performing a preset process on the first scene element to obtain a second scene element using the first scene element as a light source, including:
Generating one or more first target scene element maps corresponding to the maps of the first scene elements;
And determining the direction of the light beam generated when the first scene element is a light source as a first direction, and carrying out preset processing on the one or more first target scene element maps based on the first direction to obtain a map of a second scene element taking the first scene element as the light source.
In an exemplary embodiment of the present disclosure, acquiring a third scene element included in the game scene and a pixel corresponding to the third scene element, calculating a gradient value of the pixel includes:
Acquiring a map of the third scene element included in the game scene and a target pixel of the map of the third scene element in the game scene;
And determining pixels within the preset range of the target pixel, and performing a convolution calculation according to the transparency values of the pixels within the preset range of the target pixel to obtain the gradient value of the target pixel.
In an exemplary embodiment of the present disclosure, obtaining an edge of the third scene element from the gradient value and the normalized depth value of the second scene element includes:
acquiring a depth value of the third scene element, and determining a second scene element corresponding to the depth value of the third scene element according to the depth value of the third scene element;
Calculating a product between a normalized depth value of the second scene element corresponding to the depth value of the third scene element and the gradient value of the pixel;
And when the product is larger than a preset gradient threshold value, the pixel is the edge of the third scene element.
In an exemplary embodiment of the present disclosure, to obtain the target third scene element, the game scene generation method further includes:
Acquiring attribute information of the third scene element, wherein the attribute information comprises the width and the height of the third scene element;
Dividing the game scene where the third scene element is located according to the attribute information to obtain a first scene area; the third scene element is located in the first scene area, and the first scene area comprises a plurality of first control points and a plurality of second control points;
obtaining the colors of the plurality of first control points, and multiply-blending the colors of the plurality of first control points with the color of the third scene element;
And obtaining the colors of the plurality of second control points, and divide-blending the color of the third scene element obtained by the multiply blending with the colors of the plurality of second control points to obtain the target third scene element.
In an exemplary embodiment of the present disclosure, to obtain the target third scene element, the game scene generation method further includes:
Determining the movement range of the third scene element in the game scene, determining the color of the movement range according to the first scene element and the second scene element included in the game scene, and generating a movement map;
Acquiring the position of the third scene element in the game scene, and determining the color corresponding to the third scene element in the movement map according to the position;
And superimposing the color corresponding to the third scene element in the movement map onto the third scene element to obtain the target third scene element.
In an exemplary embodiment of the present disclosure, superimposing the color corresponding to the third scene element in the movement map onto the third scene element to obtain the target third scene element includes:
Converting the movement map and the third scene element according to the preset width and height to obtain a target movement map and a converted third scene element;
converting the target movement map into a byte array, determining the position of the converted third scene element in the game scene, and acquiring, from the byte array, the color value corresponding to the position of the converted third scene element in the game scene;
And multiply-blending the color value onto the converted third scene element to obtain the target third scene element.
In an exemplary embodiment of the present disclosure, rendering the first, second, and target third scene elements according to a rendering group and a rendering hierarchy corresponding to the first, second, and target third scene elements to obtain a target game scene includes:
determining a rendering group and a rendering level to which the first scene element, the second scene element and the target third scene element belong according to the depth values of the first scene element, the second scene element and the target third scene element in the game scene;
And rendering the first scene element, the second scene element and the target third scene element according to the rendering group and the rendering level to obtain the target game scene.
According to one aspect of the present disclosure, there is provided a game scene generation device including:
The second scene element generation module is used for acquiring a first scene element included in the game scene, and performing preset processing on the first scene element to obtain a second scene element taking the first scene element as a light source;
The gradient value calculation module is used for obtaining a third scene element included in the game scene and a pixel corresponding to the third scene element and calculating a gradient value of the pixel;
the target third scene element generation module is used for obtaining edges of the third scene elements according to the gradient values and the normalized depth values of the second scene elements, and superposing the edges with the second scene elements to obtain target third scene elements;
and the target game scene generation module is used for rendering the first scene element, the second scene element and the target third scene element according to the rendering group and the rendering level corresponding to the first scene element, the second scene element and the target third scene element so as to obtain a target game scene.
According to an aspect of the present disclosure, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the game scene generation method according to any one of the above-described exemplary embodiments.
According to one aspect of the present disclosure, there is provided an electronic device including:
A processor; and
A memory for storing executable instructions of the processor;
Wherein the processor is configured to perform the game scene generation method according to any one of the above-described exemplary embodiments via execution of the executable instructions.
According to the game scene generation method provided by the embodiment of the invention, on one hand, a first scene element included in a game scene is acquired, and the first scene element is subjected to preset processing to obtain a second scene element that uses the first scene element as a light source; a third scene element included in the game scene and the pixels corresponding to the third scene element are acquired, and the gradient values of the pixels are calculated; the edge of the third scene element is obtained according to the gradient values and the normalized depth value of the second scene element, and the edge is superimposed with the second scene element to obtain a target third scene element; and the first scene element, the second scene element and the target third scene element are rendered according to their corresponding rendering groups and rendering levels to obtain a target game scene. The display order of the scene elements in the game scene is thus determined according to their rendering groups and rendering levels, rather than by determining the occlusion relationship of the scene elements through the depth buffering of a 3D scheme, so that the edges of the scene elements can contain transparent pixels without producing jagged edges, and the performance consumption of rendering is reduced. On the other hand, since the second scene element is normalized, the scene elements included in the game scene are converted into texture maps and displayed through the rendering groups and rendering levels corresponding to those texture maps; combining the advantages of the 2D scheme and the 3D scheme reduces the resource consumption of rendering calculation and improves rendering efficiency, while also improving the rendering effect so that the game scene has immersion and realism.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is evident that the drawings in the following description are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 schematically shows a flow chart of a game scene generation method according to an exemplary embodiment of the invention.
Fig. 2 schematically shows a block diagram of a game scene generation system according to an exemplary embodiment of the invention.
Fig. 3 schematically shows a flow chart of a method of generating a second scene element from a first scene element according to an example embodiment of the invention.
Fig. 4 schematically shows a flow chart of a method of calculating gradient values for pixels corresponding to a third scene element according to an example embodiment of the invention.
Fig. 5 schematically shows a flow chart of a method of deriving an edge of a third scene element from gradient values of pixels and normalized depth values of the second scene element according to an example embodiment of the invention.
Fig. 6 schematically shows a flow chart of a method of deriving a target third scene element according to an example embodiment of the invention.
FIG. 7 schematically illustrates a flowchart of a method for color superimposing third scene elements to simulate a light intensity variation in a game scene, according to an example embodiment of the invention.
Fig. 8 schematically shows a flow chart of a method of converting a movement map into a byte array and generating a target third scene element from the converted byte array according to an exemplary embodiment of the invention.
Fig. 9 schematically shows a flow chart of a method of rendering a scene element according to a rendering group and a rendering level according to an example embodiment of the invention.
Fig. 10 schematically shows a block diagram of a game scene generation apparatus according to an exemplary embodiment of the present invention.
Fig. 11 schematically shows an electronic device for implementing the above-described game scene generation method according to an exemplary embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known aspects have not been shown or described in detail to avoid obscuring aspects of the invention.
Furthermore, the drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
In the prior art, when a game scene includes semi-transparent scene elements, a rendering type and a render queue value are first set in the shader, where the render queue value represents the drawing order of the scene elements. During rendering, all semi-transparent scene elements are drawn after all opaque scene elements have been drawn, and depth writing and depth testing are set to ensure that scene elements occluded by opaque scene elements are not rendered. However, with depth writing and depth testing, the edges of the rendered scene elements cannot contain transparent pixels and jagged edges are easily produced. To solve this problem, the scene elements can be outlined and transparent edge pixels added, but the details of the scene elements are then not rich; the jagged edges can also be optimized through an anti-aliasing scheme, but the performance consumption is high.
Based on one or more of the above problems, this exemplary embodiment provides a game scene generation method, which may be executed on a device terminal, where the device terminal may include a PC, a mobile terminal, and the like; of course, those skilled in the art may also run the method of the present invention on other platforms as required, and this is not particularly limited in the present exemplary embodiment. Referring to fig. 1, the game scene generation method may include the following steps:
S110, acquiring a first scene element included in a game scene, and performing preset processing on the first scene element to obtain a second scene element taking the first scene element as a light source;
S120, acquiring a third scene element and a pixel corresponding to the third scene element, which are included in the game scene, and calculating a gradient value of the pixel;
S130, according to the gradient value and the normalized depth value of the second scene element, obtaining an edge of the third scene element, and superposing the edge and the second scene element to obtain a target third scene element;
and S140, rendering the first scene element, the second scene element and the target third scene element according to the rendering groups and the rendering levels corresponding to the first scene element, the second scene element and the target third scene element to obtain a target game scene.
According to the above game scene generation method, on one hand, a first scene element included in a game scene is acquired, and the first scene element is subjected to preset processing to obtain a second scene element that uses the first scene element as a light source; a third scene element included in the game scene and the pixels corresponding to the third scene element are acquired, and the gradient values of the pixels are calculated; the edge of the third scene element is obtained according to the gradient values and the normalized depth value of the second scene element, and the edge is superimposed with the second scene element to obtain a target third scene element; and the first scene element, the second scene element and the target third scene element are rendered according to their corresponding rendering groups and rendering levels to obtain a target game scene. The display order of the scene elements in the game scene is thus determined according to their rendering groups and rendering levels, rather than by determining the occlusion relationship of the scene elements through the depth buffering of a 3D scheme, so that the edges of the scene elements can contain transparent pixels without producing jagged edges, and the performance consumption of rendering is reduced. On the other hand, since the second scene element is normalized, the scene elements included in the game scene are converted into texture maps and displayed through the rendering groups and rendering levels corresponding to those texture maps; combining the advantages of the 2D scheme and the 3D scheme reduces the resource consumption of rendering calculation and improves rendering efficiency, while also improving the rendering effect so that the game scene has immersion and realism.
Hereinafter, each step involved in the game scene generation method of the exemplary embodiment of the present disclosure is explained and explained in detail.
First, an application scenario and an object of the exemplary embodiment of the present disclosure are explained and explained.
Specifically, the embodiment of the disclosure may be used in a user's game client: after the user enters a game scene, the lighting of the scene elements included in the game scene may be adjusted according to the illumination changes at different positions in the game scene, so that the immersion and realism of the game scene are improved.
According to the embodiment of the invention, the texture maps obtained after normalizing the scene elements are used as a basis: the texture maps of different scene elements are processed to obtain target scene elements, the display order of each scene element in the game scene is determined according to the rendering group and rendering level of the target scene elements in the game scene, and the scene elements are rendered in that order, so that the resource consumption of rendering calculation is reduced and the rendering effect of the game scene is improved, giving the game scene immersion and realism. The specific reasons are as follows. On the one hand, the texture map of the first scene element is obtained and treated as a light source, and multiple texture maps corresponding to it are offset, with the offset direction pointing away from the light source, so as to obtain the light beam map that uses the first scene element as the light source; this improves the realism of the second scene element when it is displayed in the game scene. On the other hand, the rendering group and rendering level of each scene element in the game scene are determined, and the display relationship of the scene elements in the game scene is determined according to these rendering groups and rendering levels instead of depth buffering, so that transparent pixels can exist at the edges of the scene elements without producing jagged edges, thereby reducing the resource consumption of rendering calculation.
Next, the game scene display system involved in the exemplary embodiments of the present disclosure is explained. Referring to fig. 2, the game scene display system may include: a game character generation module 210, a map acquisition module 220, a shader 230, and a user client 240. The game character generation module 210 is configured to generate, using a 2D skeletal animation editing tool, the third scene element included in the game scene, that is, a game character, where the generated third scene element is the texture map of the third scene element. The map acquisition module 220 is connected to the game character generation module 210 over a network and is configured to acquire the texture maps of the scene elements included in the game scene and, when the game scene includes a stereoscopic scene element, to normalize the depth values of the stereoscopic scene element so as to obtain its texture map. The shader 230 is connected to the map acquisition module 220 over a network and is configured to determine the display order of the scene elements according to the rendering group and rendering level of each scene element's texture map in the game scene, and to render the scene elements in that order. The user client 240 is connected to the shader 230 over a network and is used to display the rendered scene elements.
Hereinafter, the steps S110 to S140 will be explained and described in detail with reference to fig. 2.
In step S110, a first scene element included in a game scene is acquired, and a preset process is performed on the first scene element, so as to obtain a second scene element using the first scene element as a light source.
Here, the game scene may include one or more scene elements; the preset processing is offsetting the first scene element; the first scene element is a light source, which may be a natural light source, for example the sun, a flame, or lightning, or an artificial light source, for example a lit candle or a lit electric lamp; the light source is not particularly limited in this example embodiment. The preset processing of the first scene element may be: offsetting the first scene element along the direction in which the light beam is generated when the first scene element is a light source. The second scene element is the light beam map generated by taking the first scene element as a light source.
In the present exemplary embodiment, acquiring a first scene element included in a game scene includes:
acquiring depth values of scene elements included in the game scene, and carrying out normalization processing on the depth values of the scene elements to obtain a normalized map;
and determining the map belonging to the first scene element in the normalized map according to a preset threshold value.
By converting the stereoscopic scene elements into a normalized texture map, the computational performance consumption during rendering is reduced. Specifically, firstly, the depth values of the stereoscopic scene elements included in the game scene are obtained, where a depth value is the distance between a pixel and the camera in the three-dimensional world: the greater the depth value, the farther the pixel is from the camera. Secondly, the depth values of the scene elements are normalized to obtain the game scene's normalized texture map, in which the maximum depth value is 1 and the minimum depth value is 0; in the normalized texture map, the scene element with the maximum depth value may be located at the center of the texture map, or the scene element with the minimum depth value may be located there, which is not particularly limited in this example. After the normalized texture map of the game scene is obtained, the texture map belonging to the first scene element, that is, the texture map belonging to the light source, is determined in the normalized texture map according to a preset depth threshold, where the preset depth threshold may be 0.9 or 0.8; the preset depth threshold is not particularly limited in this application.
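As an illustration only, a minimal Python sketch of this normalization and threshold step might look as follows; the function name, the NumPy array layout, and the choice that pixels above the threshold belong to the light source are assumptions made for the sketch, not details taken from the patent:

import numpy as np

def extract_light_source_map(depth, threshold=0.9):
    # Normalize the depth values of the scene elements to the [0, 1] range.
    d_min, d_max = float(depth.min()), float(depth.max())
    normalized = (depth - d_min) / max(d_max - d_min, 1e-8)
    # Assumption: pixels whose normalized depth exceeds the preset depth
    # threshold are treated as belonging to the first scene element (light source).
    light_mask = normalized > threshold
    return normalized, light_mask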
Referring to fig. 3, after obtaining the texture map of the first scene element included in the game scene normalized texture map, that is, the texture map of the light source, the second scene element may be generated according to the first scene element, which may specifically include step S310 and step S320:
In step S310, one or more first target scene element maps corresponding to the maps of the first scene element are generated;
In step S320, it is determined that the direction of the light beam generated when the first scene element is the light source is the first direction, and the one or more first target scene element maps are subjected to preset processing based on the first direction, so as to obtain a map of a second scene element using the first scene element as the light source.
Hereinafter, step S310 and step S320 will be explained and explained. Specifically, the second scene element is the light beam map generated when the first scene element is a light source. Therefore, when generating the second scene element, one or more texture maps corresponding to the first scene element first need to be generated according to the texture map of the first scene element. In order to give the generated second scene element realism and immersion, when the number of generated texture maps is 1, the width and height of the texture map are larger than those of the first scene element's texture map; when multiple texture maps are generated, the width and height of each texture map are different and follow an increasing trend. For example, with two texture maps, when the width and height of the first scene element are 5 pixels and 7 pixels respectively, the width and height of the generated first texture map may be 7 pixels and 9 pixels, and the width and height of the second texture map are larger again. The specific widths and heights of the generated texture maps are not limited in this example, as long as they follow an increasing trend, and one skilled in the art can set different widths and heights for different texture maps as needed.
Secondly, after the texture maps of the one or more scene elements corresponding to the first scene element are obtained, the direction of the light beam formed by them is determined, that is, the direction of the light beam generated when the first scene element is a light source. After this direction is determined, the generated texture maps of the one or more scene elements are offset along the direction of the light beam to obtain the texture map of the second scene element, where the offset distance between the multiple generated texture maps in the texture map of the second scene element may be a fixed value or an arbitrary value, which is not limited in this embodiment.
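A minimal sketch of this offsetting step, assuming a NumPy grayscale light-source map and an integer beam direction; the per-copy enlargement described above is omitted here for brevity, and all names are illustrative:

import numpy as np

def build_beam_map(light_map, beam_dx, beam_dy, num_copies=3, step=4):
    # Shift copies of the light-source map further and further along the
    # beam direction and accumulate them into a single light beam map.
    beam = np.zeros_like(light_map, dtype=np.float32)
    for i in range(1, num_copies + 1):
        shifted = np.roll(light_map, shift=(i * step * beam_dy, i * step * beam_dx), axis=(0, 1))
        beam += shifted.astype(np.float32) / (i + 1)  # farther copies are fainter
    return np.clip(beam, 0.0, 1.0)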
In step S120, a third scene element included in the game scene and a pixel corresponding to the third scene element are acquired, and a gradient value of the pixel is calculated.
Here, the third scene element is a game character included in the game scene. The game character may be drawn using Spine (a 2D skeletal animation editing tool for games). To save computational consumption during rendering, the one or more game characters included in the game scene are drawn onto a render target (Render Target), which is a feature of the graphics processing unit that allows a stereoscopic scene to be rendered to a storage buffer or a render target texture; the render target texture is then processed by the shader so that other effects can be applied to the final image before it is displayed, and the texture map of a game character drawn with Spine carries transparency values. Calculating the gradient values of the pixels corresponding to the third scene element means performing edge detection on the third scene element; a Sobel edge detection operator may be adopted, a Prewitt edge detection operator (a first-order differential operator) may be adopted, or a Roberts edge detection operator (which searches for edges using a local differential operator) may be adopted; the edge detection algorithm is not particularly limited in this example.
In this exemplary embodiment, referring to fig. 4, a third scene element and a pixel corresponding to the third scene element are acquired, and a gradient value of the pixel is calculated, including step S410 and step S420:
in step S410, a map of the third scene element included in the game scene and a target pixel of the map of the third scene element in the game scene are acquired;
In step S420, the pixels within the preset range of the target pixel are determined, and a convolution calculation is performed according to the transparency values of the pixels within the preset range of the target pixel to obtain the gradient value of the target pixel.
Hereinafter, step S410 and step S420 will be explained and explained. Specifically, firstly, all game characters are drawn onto a blank render target so that the edge information of the game characters can be extracted. Secondly, the pixel information of a game character is acquired, and a convolution is calculated over the transparency values of the pixels included in the preset range of the target pixel, where the preset range may be the 3x3 range centered on the target pixel or the 4x4 range centered on the target pixel; the preset range is not particularly limited in this example. For example, when the edge detection algorithm is the Sobel operator and the preset range is the 3x3 range centered on the target pixel, since Sobel filtering checks the first-order gradient change of color or brightness, two convolutions are required, one in the horizontal direction and one in the vertical direction, and the two convolution kernels are respectively:

G_x = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * A
G_y = [[-1, -2, -1], [ 0, 0, 0], [+1, +2, +1]] * A

where A is the 3x3 matrix composed of the pixels included in the preset range of the target pixel, the values of the matrix A being the transparency values of those pixels; G_x represents the approximate gray-scale gradient in the horizontal direction and G_y the approximate gray-scale gradient in the vertical direction. After G_x and G_y are obtained, the approximate gradient G, i.e., the gradient value of the target pixel, can be obtained as G = sqrt(G_x^2 + G_y^2).
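For illustration, this Sobel convolution over transparency values can be sketched in Python as follows; the array names are assumptions, and the sketch assumes the target pixel lies at least one pixel away from the image border:

import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float32)

def alpha_gradient(alpha, x, y):
    # A is the 3x3 neighborhood of transparency values around the target pixel.
    a = alpha[y - 1:y + 2, x - 1:x + 2]
    gx = float((SOBEL_X * a).sum())  # approximate horizontal gradient G_x
    gy = float((SOBEL_Y * a).sum())  # approximate vertical gradient G_y
    return (gx * gx + gy * gy) ** 0.5  # gradient value G of the target pixel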
In this example embodiment, using the transparency values of the pixels included in the preset range instead of their depth values ensures that the detected edge of the third scene element lies on the edge of the third scene element's texture map.
In step S130, according to the gradient value and the normalized depth value of the second scene element, an edge of the third scene element is obtained, and the edge and the second scene element are overlapped to obtain a target third scene element.
The gradient value is the approximate gradient G obtained in step S120; the normalized depth value of the second scene element is that of the second scene element whose depth value matches the depth value of the third scene element; the edge of the third scene element is the place in the texture map where brightness changes significantly; and superimposing the edge with the second scene element means superimposing the detected edge of the game character onto the texture map of the second scene element.
In this exemplary embodiment, referring to fig. 5, according to the gradient value and the normalized depth value of the second scene element, an edge of the third scene element is obtained, which includes steps S510-S530:
In step S510, a depth value of the third scene element is obtained, and a second scene element corresponding to the depth value of the third scene element is determined according to the depth value of the third scene element;
in step S520, calculating a product between a normalized depth value of the second scene element corresponding to the depth value of the third scene element and the gradient value of the pixel;
in step S530, when the product is greater than a preset gradient threshold, the pixel is an edge of the third scene element.
Hereinafter, step S510 to step S530 will be explained and explained. Specifically, first, the depth values of the pixels included in the game character in the game scene are determined, and at the same time the pixels of the second scene element corresponding to those depth values and the normalized depth values of those second scene element pixels are determined. Secondly, the product between the gradient value of a pixel included in the game character and the normalized depth value of the corresponding second scene element pixel is calculated; when the product is larger than a preset gradient threshold, the pixel included in the game character is considered to be an edge of the game character, where the preset gradient threshold may be 0.8 or 0.9; the preset gradient threshold is not particularly limited in this example.
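A minimal sketch of this edge test, assuming the normalized depth value of the matching second scene element pixel has already been looked up; the names and default threshold are illustrative:

def is_character_edge(gradient_value, beam_normalized_depth, threshold=0.8):
    # A pixel of the game character counts as an edge when the product of its
    # gradient value and the normalized depth value of the matching second
    # scene element pixel exceeds the preset gradient threshold.
    return gradient_value * beam_normalized_depth > threshold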
In this exemplary embodiment, referring to fig. 6, a target third scene element is obtained, and the game scene generating method further includes steps S610 to S640:
in step S610, attribute information of the third scene element is acquired, where the attribute information includes a width and a height of the third scene element;
In step S620, the game scene where the third scene element is located is divided according to the attribute information, so as to obtain a first scene area; the third scene element is located in the first scene area, and the first scene area comprises a plurality of first control points and a plurality of second control points;
In step S630, the colors of the plurality of first control points are obtained, and the colors of the plurality of first control points are multiply-blended with the color of the third scene element;
In step S640, the colors of the plurality of second control points are obtained, and the color of the third scene element obtained by the multiply blending is divide-blended with the colors of the plurality of second control points to obtain the target third scene element.
Hereinafter, step S610 to step S640 will be explained and explained. Specifically, when dividing the game scene where the third scene element is located, the rectangular area of the game character in the game scene's texture map can be determined according to the width and height of the game character, and the rectangular area of the game character in the game scene can also be adjusted through an editor; the way the game scene is divided is not particularly limited in this example. Adjusting the light of the second scene element included in the game scene may proceed as follows. Firstly, the width and height of the third scene element included in the game scene, namely the game character, are obtained, and the rectangular area of the game character in the game scene is determined according to them. Secondly, a plurality of first control points and a plurality of second control points are set based on the rectangular area, where each control point carries a color and the first control points form a color gradient from the colors they carry; the number of control points among the first control points and among the second control points is in each case no less than 2. Thirdly, the colors included in the first control points are obtained, and the gradient color of the first control points is blended onto the game character by multiply blending, where the multiply blend formula may be:
final_color=float4(lerp(color.xyz,color.xyz*blend_color.xyz,float3(blend_color.w)),color.w)
where final_color is the color of the game character obtained after the multiply blending; the four components of a color, namely the red, green, blue, and transparency values, may be represented by x, y, z, w respectively, or by other labels, which is not particularly limited in this example; lerp is an interpolation function that can be used to give a buffering effect to the light on the game character in the game scene; color.xyz and color.w represent the color components of the game character's pixel, and blend_color.xyz and blend_color.w represent the color components of the first control point corresponding to that pixel. Finally, the colors included in the second control points are obtained, and the colors of the second control points are blended onto the multiply-blended game character by divide blending so as to adjust the brightness of the game character, where the divide blend formula may be:
final_color=float4(color.xyz/blend_color.xyz,color.w)
where final_color is the color of the game character obtained after the divide blending; color.xyz and color.w represent the color components of the game character's pixel after the multiply blending, and blend_color.xyz represents the color components of the second control point corresponding to that pixel.
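For illustration, the two blend formulas above can be mirrored in Python roughly as follows, assuming RGBA colors stored as NumPy arrays with components in [0, 1]; the function names and the clamping are additions for the sketch:

import numpy as np

def multiply_blend(color, blend_color):
    # Mirrors final_color = float4(lerp(color.xyz, color.xyz * blend_color.xyz,
    # blend_color.w), color.w): the blend strength comes from blend_color's alpha.
    rgb = color[:3] + (color[:3] * blend_color[:3] - color[:3]) * blend_color[3]
    return np.append(rgb, color[3])  # the original alpha (color.w) is kept

def divide_blend(color, blend_color):
    # Mirrors final_color = float4(color.xyz / blend_color.xyz, color.w);
    # dividing by values below 1 brightens the game character.
    rgb = color[:3] / np.clip(blend_color[:3], 1e-6, None)
    return np.append(np.clip(rgb, 0.0, 1.0), color[3])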
Further, referring to fig. 7, when a game character moves in the game scene, since the game scene includes a second scene element that uses the first scene element as a light source, the light on the game character keeps changing as the game character moves through the game scene. A color therefore needs to be superimposed according to the position of the game character to simulate the change of light brightness in the game scene, which specifically includes steps S710-S730:
In step S710, the movement range of the third scene element in the game scene is determined, the color of the movement range is determined according to the first scene element and the second scene element included in the game scene, and a movement map is generated;
In step S720, the position of the third scene element in the game scene is acquired, and the color corresponding to the third scene element is determined in the movement map according to the position;
In step S730, the color corresponding to the third scene element in the movement map is superimposed onto the third scene element to obtain the target third scene element.
Hereinafter, step S710 to step S730 will be explained and explained. Specifically, since the game character moves within the texture map of the game scene, i.e., within a rectangular plane, the color to superimpose at each position can be recorded with a rectangular map. Firstly, the movement range of the game character in the game scene is acquired, and a movement map of the game character in the game scene is generated by combining, within the movement range, the light source included in the game scene and the light beam emitted by the light source; the movement map is used to simulate the brightness change of the game scene. Secondly, the position of the game character in the game scene is determined, and the color point corresponding to the game character is determined in the movement map according to that position. Finally, the color point is multiply-blended onto the game character, where the blend formula may be:
final_color=float4(lerp(color.xyz,color.xyz*multiply_color.xyz,float3(multiply_color.w)),color.w)
where final_color is the color obtained after multiply-blending the color point from the movement map; color.xyz and color.w represent the color components of the game character's pixel after the multiply and divide blending, and multiply_color.xyz and multiply_color.w represent the color components of the movement map pixel corresponding to the game character's pixel.
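A minimal sketch of this lookup-and-blend step, under the same assumptions as the blend sketch above (NumPy RGBA arrays in [0, 1]; names illustrative):

import numpy as np

def shade_character_pixel(char_color, movement_map, x, y):
    # Look up the color recorded in the movement map at the character's
    # position and multiply-blend it onto the character's pixel color,
    # using the same lerp-based formula as the shader line above.
    m = movement_map[y, x]  # RGBA values in [0, 1]
    rgb = char_color[:3] + (char_color[:3] * m[:3] - char_color[:3]) * m[3]
    return np.append(rgb, char_color[3])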
Still further, referring to fig. 8, in order to reduce the number of samples taken from the movement map, the movement map may be converted offline into a byte array, which may include steps S810-S830:
in step S810, the movement map and the third scene element are converted according to the preset width and height to obtain a target movement map and a converted third scene element;
In step S820, the target movement map is converted into a byte array, the position of the converted third scene element in the game scene is determined, and the color value corresponding to that position is acquired from the byte array;
In step S830, the color value is multiply-blended onto the converted third scene element to obtain the target third scene element.
Specifically, firstly, before the movement map is converted into a byte array, the movement map and the map of the game character need to be unified. This specifically includes converting the texture maps of the movement map and the game character to a unified height and width according to the preset width and height, where the preset width and height may be 1024 x 128 or 2048 x 256; the preset width and height are not limited in this example. When the preset width and height are 1024 x 128, the height and width of the movement map are uniformly set to 1024 x 128, and the position of the game character in the game scene is converted into the 1024 x 128 range. Secondly, the movement map is converted into a byte array, so that after being loaded into the game the movement map is stored in array form. Thirdly, the position of the game character in the game scene is determined, and the movement map pixel corresponding to that position is acquired. Finally, the color value corresponding to that pixel is determined in the array, and the color value is superimposed onto the game character to obtain the target game character.
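As an illustration of the byte-array step, assuming an RGBA movement map already resized to the preset 1024 x 128 dimensions; all names are assumptions for the sketch:

import numpy as np

MAP_W, MAP_H = 1024, 128  # the preset width and height from the example above

def bake_movement_map(movement_map):
    # Offline step: flatten the resized RGBA movement map into a plain byte
    # array so the game can index it directly instead of sampling a texture.
    assert movement_map.shape[:2] == (MAP_H, MAP_W)
    return bytearray(movement_map.astype(np.uint8).tobytes())

def lookup_color(byte_array, x, y):
    # Read the RGBA color value stored for position (x, y) in the array.
    offset = (y * MAP_W + x) * 4
    return tuple(byte_array[offset:offset + 4])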
In step S140, the first scene element, the second scene element and the target third scene element are rendered according to the rendering groups and the rendering levels corresponding to the first scene element, the second scene element and the target third scene element, so as to obtain a target game scene.
The rendering group and the rendering level are used to determine the display order of the scene elements in the game scene. Specifically, referring to fig. 9, rendering according to the rendering groups and rendering levels corresponding to the first scene element, the second scene element and the target third scene element to obtain the target game scene includes step S910 and step S920:
In step S910, determining a rendering group and a rendering level to which the first scene element, the second scene element, and the target third scene element belong according to the depth values of the first scene element, the second scene element, and the target third scene element in the game scene;
In step S920, the first scene element, the second scene element, and the target third scene element are rendered according to the rendering group and the rendering level, so as to obtain the target game scene.
Hereinafter, step S910 and step S920 will be explained and explained. Specifically, firstly, the depth value of each scene element included in the game scene is obtained, and the rendering group of the scene element in the game scene is determined according to its depth value; the rendering groups are not particularly limited in this example, and one skilled in the art can set different rendering groups according to the requirements of the game scene. After the rendering group to which a scene element belongs is determined, the rendering level of the scene element within its rendering group is determined according to the depth value of the scene element. When the rendering groups of different scene elements are the same, the larger the rendering level value, the further toward the upper layer the scene element is drawn when rendered through the shader; when the rendering groups of the scene elements are different, the larger the rendering group, the further toward the upper layer the scene element is drawn when rendered through the shader. Finally, the scene elements included in the game scene are rendered according to their rendering groups and rendering levels to obtain the target game scene.
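A minimal sketch of this ordering rule, with names assumed for illustration; sorting replaces the depth buffer with an explicit draw order:

from dataclasses import dataclass

@dataclass
class SceneElement:
    name: str
    render_group: int  # larger group values are drawn further toward the upper layer
    render_level: int  # within one group, larger level values are drawn further up

def render_order(elements):
    # Draw in ascending (group, level) order so later entries land on top.
    return sorted(elements, key=lambda e: (e.render_group, e.render_level))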
The game scene generation method and the game scene generation system provided by the example embodiments of the present disclosure have at least the following advantages: texture maps of the scene elements are generated, and the different scene elements in the game scene are rendered according to their rendering groups and rendering levels, so that the resource consumption of rendering calculation is reduced and rendering efficiency is improved; a movement map is generated according to the brightness change of the game character as it moves through the game scene, and the map is converted into a byte array, which reduces the number of map samples; and the scene elements are rendered according to their rendering groups and rendering levels in the game scene, so that the edges of the scene elements can contain transparent pixels without producing jagged edges, and the performance consumption of rendering is reduced.
The exemplary embodiment of the present invention also provides a game scene generation apparatus, which may include, with reference to fig. 10: a second scene element generation module 1010, a gradient value calculation module 1020, a target third scene element generation module 1030, and a target game scene generation module 1040. Wherein:
A second scene element generating module 1010, configured to obtain a first scene element included in a game scene, and perform a preset process on the first scene element to obtain a second scene element using the first scene element as a light source;
a gradient value calculating module 1020, configured to obtain a third scene element included in the game scene and a pixel corresponding to the third scene element, and calculate a gradient value of the pixel;
A target third scene element generating module 1030, configured to obtain an edge of the third scene element according to the gradient value and the normalized depth value of the second scene element, and superimpose the edge and the second scene element to obtain a target third scene element;
The target game scene generation module 1040 is configured to render the first scene element, the second scene element, and the target third scene element according to a rendering group and a rendering hierarchy corresponding to the first scene element, the second scene element, and the target third scene element, so as to obtain a target game scene.
The specific details of each module in the above game scene generating device are described in detail in the corresponding game scene generating method, so that the details are not repeated here.
In one exemplary embodiment of the present disclosure, acquiring a first scene element included in a game scene includes:
acquiring depth values of scene elements included in the game scene, and carrying out normalization processing on the depth values of the scene elements to obtain a normalized map;
and determining the mapping belonging to the first scene element in the normalized mapping according to a preset threshold value.
In an exemplary embodiment of the present disclosure, performing a preset process on the first scene element to obtain a second scene element using the first scene element as a light source, including:
Generating one or more first target scene element maps corresponding to the maps of the first scene elements;
And determining the direction of the light beam generated when the first scene element is a light source as a first direction, and carrying out preset processing on the one or more first target scene element maps based on the first direction to obtain a map of a second scene element taking the first scene element as the light source.
In an exemplary embodiment of the present disclosure, acquiring a third scene element included in the game scene and a pixel corresponding to the third scene element, calculating a gradient value of the pixel includes:
Acquiring a map of the third scene element included in the game scene and a target pixel of the map of the third scene element in the game scene;
And determining pixels in the preset range of the target pixel, and performing convolution calculation according to the transparent values of the pixels in the preset range of the target pixel to obtain the gradient value of the target pixel.
In an exemplary embodiment of the present disclosure, obtaining an edge of the third scene element from the gradient value and the normalized depth value of the second scene element includes:
acquiring a depth value of the third scene element, and determining a second scene element corresponding to the depth value of the third scene element according to the depth value of the third scene element;
Calculating a product between a normalized depth value of the second scene element corresponding to the depth value of the third scene element and the gradient value of the pixel;
and when the product is greater than a preset gradient threshold, determining that the pixel belongs to the edge of the third scene element (a sketch follows).
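Combined with the previous sketch, the edge test then reduces to an elementwise product and a threshold comparison; the threshold value here is an arbitrary placeholder, not a value from the disclosure.

```python
def third_element_edges(gradient, beam_depth, gradient_threshold=0.2):
    """Flag a pixel as an edge of the third scene element when the product of
    the matching second scene element's normalized depth value and the pixel's
    gradient value exceeds a preset threshold."""
    return (beam_depth * gradient) > gradient_threshold  # boolean edge mask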
In an exemplary embodiment of the present disclosure, to obtain the target third scene element, the game scene generation method further includes:
Acquiring attribute information of the third scene element, wherein the attribute information comprises the width and the height of the third scene element;
Dividing the game scene where the third scene element is located according to the attribute information to obtain a first scene area; the third scene element is located in the first scene area, and the first scene area comprises a plurality of first control points and a plurality of second control points;
obtaining the colors of the plurality of first control points, and multiply-blending the colors of the plurality of first control points with the color of the third scene element;
and obtaining the colors of the plurality of second control points, and divide-blending the color of the third scene element obtained by the multiply blending with the colors of the plurality of second control points to obtain the target third scene element (see the sketch below).
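A sketch of the two blend passes, assuming colors are floats in [0, 1] and that each set of control point colors is first averaged into a single color; the disclosure does not state how multiple control point colors are combined, so the averaging is an assumption.

```python
import numpy as np

def blend_with_control_points(element_rgb, first_point_colors, second_point_colors):
    """Multiply-blend the third scene element with the first control points'
    (averaged) color, then divide-blend the result with the second control
    points' (averaged) color."""
    dark = np.mean(first_point_colors, axis=0)    # combined first-control-point color
    light = np.mean(second_point_colors, axis=0)  # combined second-control-point color
    multiplied = element_rgb * dark               # multiply blend: darkens
    # Divide blend: brightens; clamp the divisor and the result.
    return np.clip(multiplied / np.clip(light, 1e-8, None), 0.0, 1.0)
```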
In an exemplary embodiment of the present disclosure, to obtain the target third scene element, the game scene generation method further includes:
determining a movement range of the third scene element in the game scene, determining the color of the movement range according to the first scene element and the second scene element included in the game scene, and generating a movement map;
acquiring the position of the third scene element in the game scene, and determining the color corresponding to the third scene element in the movement map according to the position;
and overlaying the color feature corresponding to the third scene element in the movement map onto the third scene element to obtain the target third scene element (a sketch follows).
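One plausible reading of this step, sketched below: the movement map is sampled at the element's normalized position and the sampled color is overlaid multiplicatively, so the element's tint follows the surrounding lighting as it moves. Nearest-neighbor sampling and the multiply-style overlay are assumptions.

```python
def tint_from_movement_map(movement_map, element_rgb, pos_uv):
    """Sample the movement map at the element's normalized (u, v) position
    and overlay the sampled color onto the element."""
    h, w, _ = movement_map.shape
    u, v = pos_uv
    x = min(int(u * (w - 1)), w - 1)  # nearest-neighbor sample (assumption)
    y = min(int(v * (h - 1)), h - 1)
    sampled = movement_map[y, x]
    return element_rgb * sampled      # multiplicative overlay (assumption)
```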
In an exemplary embodiment of the present disclosure, overlaying the color feature corresponding to the third scene element in the movement map onto the third scene element to obtain the target third scene element includes:
converting the movement map and the third scene element according to a preset width and height to obtain a target movement map and a converted third scene element;
converting the target movement map into a byte array, determining the position of the converted third scene element in the game scene, and acquiring, from the byte array, the color value corresponding to that position;
and multiply-blending the color value onto the converted third scene element to obtain the target third scene element (sketched below).
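A sketch of the byte-array variant, assuming an 8-bit RGB(A) texture laid out row-major; engines differ in how they expose a texture as a byte array, so the layout and indexing here are assumptions.

```python
import numpy as np

def blend_color_from_bytes(movement_map_u8, element_rgb_u8, x, y):
    """Look up the color value at pixel (x, y) in the movement map's byte
    array and multiply-blend it onto the converted third scene element."""
    h, w, channels = movement_map_u8.shape
    flat = movement_map_u8.reshape(-1)  # the byte array
    offset = (y * w + x) * channels     # row-major indexing (assumption)
    color = flat[offset:offset + 3].astype(np.float32) / 255.0
    element = element_rgb_u8.astype(np.float32) / 255.0
    return (element * color * 255.0).astype(np.uint8)  # multiply blend in 8-bit
```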
In an exemplary embodiment of the present disclosure, rendering the first, second, and target third scene elements according to a rendering group and a rendering hierarchy corresponding to the first, second, and target third scene elements to obtain a target game scene includes:
determining a rendering group and a rendering level to which the first scene element, the second scene element and the target third scene element belong according to the depth values of the first scene element, the second scene element and the target third scene element in the game scene;
and rendering the first scene element, the second scene element, and the target third scene element according to the rendering group and the rendering level to obtain the target game scene (a final sketch follows).
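Finally, the depth-based grouping and ordered rendering might look like the following sketch. The group count, the back-to-front drawing order, and the renderer.draw interface are all assumptions; the disclosure only requires assigning rendering groups and levels by depth before rendering.

```python
def render_by_depth(elements, renderer, num_groups=3):
    """Assign each element (a dict with a 'depth' in [0, 1]) to a rendering
    group by depth, then draw groups back-to-front and, within each group,
    elements in back-to-front level order."""
    groups = {g: [] for g in range(num_groups)}
    for e in elements:
        g = min(int(e["depth"] * num_groups), num_groups - 1)
        groups[g].append(e)
    for g in sorted(groups, reverse=True):  # farthest group first
        for e in sorted(groups[g], key=lambda el: el["depth"], reverse=True):
            renderer.draw(e)  # hypothetical renderer interface
```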
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the invention. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods of the present invention are depicted in the accompanying drawings in a particular order, this neither requires nor implies that the steps must be performed in that particular order, or that all of the steps shown must be performed, in order to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
In an exemplary embodiment of the present invention, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein generally as a "circuit," "module," or "system."
An electronic device 1100 according to this embodiment of the invention is described below with reference to fig. 11. The electronic device 1100 shown in fig. 11 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 11, the electronic device 1100 is embodied in the form of a general-purpose computing device. The components of the electronic device 1100 may include, but are not limited to: at least one processing unit 1110, at least one storage unit 1120, a bus 1130 connecting the different system components (including the storage unit 1120 and the processing unit 1110), and a display unit 1140.
Wherein the storage unit stores program code that is executable by the processing unit 1110 such that the processing unit 1110 performs steps according to various exemplary embodiments of the present invention described in the above-described "exemplary methods" section of the present specification. For example, the processing unit 1110 may perform step S110 as shown in fig. 1: acquiring a first scene element included in a game scene, and performing preset processing on the first scene element to obtain a second scene element taking the first scene element as a light source; s120: acquiring a third scene element included in the game scene and a pixel corresponding to the third scene element, and calculating a gradient value of the pixel; s130: obtaining edges of the third scene elements according to the gradient values and the normalized depth values of the second scene elements, and superposing the edges and the second scene elements to obtain target third scene elements; s140: and rendering the first scene element, the second scene element and the target third scene element according to the rendering group and the rendering level corresponding to the first scene element, the second scene element and the target third scene element to obtain a target game scene.
The storage unit 1120 may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) 11201 and/or a cache memory 11202, and may further include a read-only memory (ROM) 11203.
Storage unit 1120 may also include a program/utility 11204 having a set (at least one) of program modules 11205, such program modules 11205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The bus 1130 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The electronic device 1100 may also communicate with one or more external devices 1200 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1100, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 1100 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1150. The electronic device 1100 can also communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through a network adapter 1160. As shown, the network adapter 1160 communicates with the other modules of the electronic device 1100 via the bus 1130. It should be appreciated that, although not shown, other hardware and/or software modules may be used in connection with the electronic device 1100, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software combined with the necessary hardware. Thus, the technical solution according to the embodiments of the present invention may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions for causing a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to perform the method according to the embodiments of the present invention.
In an exemplary embodiment of the present invention, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
A program product for implementing the above method according to an embodiment of the present invention may employ a portable compact disc read-only memory (CD-ROM), include program code, and be run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

Claims (11)

1. A game scene generation method, characterized by comprising:
acquiring first scene elements included in a game scene, and generating one or more first target scene element maps corresponding to the maps of the first scene elements; determining the direction of a light beam generated when the first scene element is a light source as a first direction, and performing offset processing on the one or more first target scene element maps based on the first direction to obtain a map of a second scene element taking the first scene element as the light source;
acquiring a third scene element included in the game scene and a pixel corresponding to the third scene element, and calculating a gradient value of the pixel;
obtaining edges of the third scene elements according to the gradient values and the normalized depth values of the second scene elements, and superposing the edges and the second scene elements to obtain target third scene elements;
and rendering the first scene element, the second scene element and the target third scene element according to the rendering group and the rendering level corresponding to the first scene element, the second scene element and the target third scene element to obtain a target game scene.
2. The game scene generation method according to claim 1, wherein acquiring the first scene element included in the game scene comprises:
acquiring depth values of scene elements included in the game scene, and carrying out normalization processing on the depth values of the scene elements to obtain a normalized map;
and determining the mapping belonging to the first scene element in the normalized mapping according to a preset threshold value.
3. The game scene generation method according to claim 1, wherein acquiring a third scene element included in the game scene and a pixel corresponding to the third scene element, calculating a gradient value of the pixel, comprises:
Acquiring a map of the third scene element included in the game scene and a target pixel of the map of the third scene element in the game scene;
And determining pixels in the preset range of the target pixel, and performing convolution calculation according to the transparent values of the pixels in the preset range of the target pixel to obtain the gradient value of the target pixel.
4. The game scene generation method according to claim 3, wherein deriving edges of the third scene element from the gradient values and the normalized depth values of the second scene element comprises:
acquiring a depth value of the third scene element, and determining a second scene element corresponding to the depth value of the third scene element according to the depth value of the third scene element;
Calculating a product between a normalized depth value of the second scene element corresponding to the depth value of the third scene element and the gradient value of the pixel;
And when the product is larger than a preset gradient threshold value, the pixel is the edge of the third scene element.
5. The game scene generation method according to claim 4, wherein, to obtain the target third scene element, the game scene generation method further comprises:
Acquiring attribute information of the third scene element, wherein the attribute information comprises the width and the height of the third scene element;
Dividing the game scene where the third scene element is located according to the attribute information to obtain a first scene area; the third scene element is located in the first scene area, and the first scene area comprises a plurality of first control points and a plurality of second control points;
obtaining the colors of the plurality of first control points, and multiply-blending the colors of the plurality of first control points with the color of the third scene element;
and obtaining the colors of the plurality of second control points, and divide-blending the color of the third scene element obtained by the multiply blending with the colors of the plurality of second control points to obtain the target third scene element.
6. The game scene generation method according to claim 5, wherein, to obtain the target third scene element, the game scene generation method further comprises:
determining a movement range of the third scene element in the game scene, determining the color of the movement range according to the first scene element and the second scene element included in the game scene, and generating a movement map;
acquiring the position of the third scene element in the game scene, and determining the color corresponding to the third scene element in the movement map according to the position;
and overlaying the color feature corresponding to the third scene element in the movement map onto the third scene element to obtain the target third scene element.
7. The game scene generation method according to claim 6, wherein overlaying the color feature corresponding to the third scene element in the movement map onto the third scene element to obtain the target third scene element comprises:
converting the movement map and the third scene element according to a preset width and height to obtain a target movement map and a converted third scene element;
converting the target movement map into a byte array, determining the position of the converted third scene element in the game scene, and acquiring, from the byte array, the color value corresponding to the position of the converted third scene element in the game scene;
and multiply-blending the color value onto the converted third scene element to obtain the target third scene element.
8. The game scene generation method according to claim 7, wherein rendering the first scene element, the second scene element, and the target third scene element according to a rendering group and a rendering hierarchy corresponding to the first scene element, the second scene element, and the target third scene element to obtain a target game scene comprises:
determining a rendering group and a rendering level to which the first scene element, the second scene element and the target third scene element belong according to the depth values of the first scene element, the second scene element and the target third scene element in the game scene;
and rendering the first scene element, the second scene element, and the target third scene element according to the rendering group and the rendering level to obtain the target game scene.
9. A game scene generation apparatus, comprising:
The second scene element generation module is used for acquiring first scene elements included in the game scene and generating one or more first target scene element maps corresponding to the maps of the first scene elements; determining the direction of a light beam generated when the first scene element is a light source as a first direction, and performing offset processing on the one or more first target scene element maps based on the first direction to obtain a map of a second scene element taking the first scene element as the light source;
The gradient value calculation module is used for obtaining a third scene element included in the game scene and a pixel corresponding to the third scene element and calculating a gradient value of the pixel;
the target third scene element generation module is used for obtaining edges of the third scene elements according to the gradient values and the normalized depth values of the second scene elements, and superposing the edges with the second scene elements to obtain target third scene elements;
and the target game scene generation module is used for rendering the first scene element, the second scene element and the target third scene element according to the rendering group and the rendering level corresponding to the first scene element, the second scene element and the target third scene element so as to obtain a target game scene.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the game scene generation method of any of claims 1-8.
11. An electronic device, comprising:
A processor; and
A memory for storing executable instructions of the processor;
Wherein the processor is configured to perform the game scenario generation method of any one of claims 1-8 via execution of the executable instructions.
CN202110276020.1A 2021-03-15 2021-03-15 Game scene generation method and device, readable storage medium and electronic equipment Active CN112891946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110276020.1A CN112891946B (en) 2021-03-15 2021-03-15 Game scene generation method and device, readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112891946A CN112891946A (en) 2021-06-04
CN112891946B true CN112891946B (en) 2024-05-28

Family

ID=76105056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110276020.1A Active CN112891946B (en) 2021-03-15 2021-03-15 Game scene generation method and device, readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112891946B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470169B (en) * 2021-06-30 2022-04-29 完美世界(北京)软件科技发展有限公司 Game scene generation method and device, computer equipment and readable storage medium
CN114071181B (en) * 2021-11-26 2024-07-26 上海米哈游天命科技有限公司 Image processing method, system, equipment and computer readable storage medium
CN115808974B (en) * 2022-07-29 2023-08-29 深圳职业技术学院 Immersive command center construction method, immersive command center construction system and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002092637A (en) * 2000-09-19 2002-03-29 Namco Ltd Game system and information storage medium
CN107734267A (en) * 2017-09-11 2018-02-23 广东欧珀移动通信有限公司 Image processing method and device
CN109564748A (en) * 2016-03-15 2019-04-02 萨瑟兰德·库克·埃尔伍德 Mix photonic system
WO2020119444A1 (en) * 2018-12-13 2020-06-18 腾讯科技(深圳)有限公司 Game image rendering method and device, terminal, and storage medium
KR102173546B1 (en) * 2020-04-28 2020-11-03 주식회사 라이언게임즈 Apparatus and method of rendering game objects
CN111932682A (en) * 2020-07-24 2020-11-13 国家电网有限公司 Multi-type virtual role generation method for distribution line operation and inspection operation VR simulation system

Also Published As

Publication number Publication date
CN112891946A (en) 2021-06-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant