CN111489430B - Game light and shadow data processing method and device and game equipment - Google Patents


Info

Publication number
CN111489430B
Authority
CN
China
Prior art keywords
scene
shadow
game
range
target scene
Prior art date
Legal status
Active
Application number
CN202010272017.8A
Other languages
Chinese (zh)
Other versions
CN111489430A (en)
Inventor
陈艺 (Chen Yi)
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202310856806.XA (published as CN117132702A)
Priority to CN202010272017.8A (published as CN111489430B)
Publication of CN111489430A
Application granted
Publication of CN111489430B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/50: Lighting effects
    • G06T15/60: Shadow generation
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/50: Lighting effects
    • G06T15/506: Illumination models
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6646: Methods for processing data by generating or executing the game program for rendering three dimensional images for the computation and display of the shadow of an object or character
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The application provides a method and an apparatus for processing light and shadow data of a game, and a game device, relating to the technical field of games and addressing the technical problem that computing power and the shadow coverage effect in a scene are difficult to balance. The method comprises the following steps: determining a plurality of target scene ranges; generating a scene shadow map corresponding to each target scene range; and determining the light and shadow data of a game picture of the game scene to be displayed from the scene shadow maps of the target scene ranges according to the priority of each scene shadow map.

Description

Game light and shadow data processing method and device and game equipment
Technical Field
The present application relates to the field of game technologies, and in particular, to a method and an apparatus for processing light and shadow data of a game, and a game device.
Background
For games, the expressive power of the game scene is important: the visual presentation of the game scene in the game picture, rather than the gameplay itself, is what is shown to the player first. The game scene may include: a virtual space, virtual objects, virtual characters, shadows produced by illumination of the virtual objects and virtual characters, and the like.
Currently, the shadows shown in game pictures are provided by shadow maps, which are baked by continuously calculating the shadow data in the game scene in real time so that dynamic shadow effects can be displayed. This real-time baking of shadow maps therefore consumes considerable computing power.
However, if the shadow coverage in the game scene is reduced to save computing power, the shadow effect in the picture suffers and the game experience degrades. It is therefore currently difficult to balance computing power against the shadow coverage effect in a scene.
Disclosure of Invention
The aim of the present application is to provide a method and an apparatus for processing light and shadow data of a game, and a game device, which solve the technical problem that computing power and the shadow coverage effect in a scene are difficult to balance.
In a first aspect, an embodiment of the present application provides a method for processing light and shadow data of a game, applied to a game device, wherein the three-dimensional game scene of the game comprises a virtual object. The method comprises the following steps:
determining a plurality of target scene ranges;
generating a scene shadow map corresponding to the target scene range, wherein a first target scene range is smaller than a second target scene range, and the accuracy of the first shadow map corresponding to the first target scene range is greater than the accuracy of the second shadow map corresponding to the second target scene range;
and determining the light and shadow data of a game picture of the game scene to be displayed according to the scene shadow maps of the target scene ranges and the priority of each scene shadow map, wherein the precision of the scene shadow maps is positively related to the priority.
In an optional implementation, the step of determining a plurality of target scene ranges includes:
acquiring a first position of a virtual camera in the three-dimensional game scene and a second position of the virtual object in the three-dimensional game scene;
acquiring a first initial scene range based on the second position;
determining the position offset of the virtual camera and the virtual object according to the first position and the second position;
and moving the first initial scene range according to the position offset to obtain the first target scene range.
In an alternative implementation, the step of determining a plurality of target scene ranges includes:
acquiring a first position of a virtual camera in the three-dimensional game scene and a second position of the virtual object in the three-dimensional game scene;
determining a distance between the virtual camera and the virtual object according to the first position and the second position;
acquiring a first initial scene range based on the second position;
and scaling the first initial scene range based on the distance to obtain the first target scene range.
In an alternative implementation, the step of determining a plurality of target scene ranges includes:
acquiring a first position of a virtual camera in the three-dimensional game scene and a second position of the virtual object in the three-dimensional game scene;
acquiring the field angle of the virtual camera;
acquiring a first initial scene range based on the second position;
and scaling the first initial scene range based on the field angle to obtain the first target scene range.
In an optional implementation, the step of generating a scene shadow map corresponding to the target scene range includes:
determining scene shadow maps of a plurality of target scene ranges based on a preset update frequency and the second position of the virtual object in the three-dimensional game scene, wherein among the plurality of target scene ranges, a larger range corresponds to a lower preset update frequency.
In an optional implementation, the step of determining a scene shadow map of a plurality of target scene ranges based on a preset update frequency and a second position of the virtual object in the three-dimensional game scene includes:
determining a scene shadow map of the first target scene range and/or the second target scene range based on the second position when the update frequency of the first target scene range and/or the second target scene range is satisfied;
when the update frequency of the second target scene range is not satisfied, reusing the previously generated scene shadow map of the second target scene range;
when the preset update frequency of the first target scene range is not met, and the moving range of the virtual object meets the preset moving range or the sunlight direction in the game scene to be displayed meets the preset direction range, determining a scene shadow map of the first target scene range based on the second position;
and when the preset update frequency of the first target scene range is not met, and the moving range of the virtual object does not meet the preset moving range and the sunlight direction in the game scene to be displayed does not meet the preset direction range, reusing the previously generated scene shadow map of the first target scene range.
In an optional implementation, the step of determining, from the scene shadow maps of the target scene ranges and according to the priority of each scene shadow map, the light and shadow data of the game picture of the game scene to be displayed includes:
generating a screen shadow map from the scene shadow maps according to the priority of each scene shadow map;
and determining light and shadow data of a game picture of the game scene to be displayed based on the screen shadow map, so that the game picture of the game scene to be displayed is displayed on a screen of the game device based on the light and shadow data.
In a second aspect, an embodiment of the present application provides a method for processing light and shadow data of a game, applied to a game device, wherein the three-dimensional game scene of the game comprises a virtual object. The method comprises the following steps:
responding to the operation of the virtual object entering a house, and acquiring a current illumination angle and a plurality of preset illumination angles, wherein each preset illumination angle corresponds to preset indirect light action intensity data;
determining indirect light action intensity data of the current illumination angle according to the angle difference between the current illumination angle and each preset illumination angle and the indirect light action intensity data of each preset illumination angle;
and determining a game picture in the house based on the indirect light action intensity data of the current illumination angle.
In an optional implementation, the number of the preset illumination angles is three, the three preset illumination angles correspond to one preset illumination map, and three channels of the preset illumination map are respectively used for storing indirect light action intensity data corresponding to one preset illumination angle; determining indirect light action intensity data of the current illumination angle according to the angle difference between the current illumination angle and each preset illumination angle and the indirect light action intensity data of each preset illumination angle, wherein the step comprises the following steps:
and determining an illumination map of the current illumination angle according to the angle difference between the current illumination angle and each preset illumination angle and the preset illumination map, wherein the illumination map of the current illumination angle is used for storing indirect light action intensity data of the current illumination angle.
In an alternative implementation, the indirect light action intensity data includes a ratio of the action intensity of ambient light to the action intensity of sunlight.
In an optional implementation, the step of determining the illumination map of the current illumination angle according to the angle difference between the current illumination angle and each preset illumination angle and the preset illumination map includes:
determining weights corresponding to each preset illumination angle according to the angle difference between the current illumination angle and each preset illumination angle;
and carrying out weighted summation on the indirect light action intensity data of the three preset illumination angles included in each mapping pixel point in the preset illumination mapping based on the weight corresponding to each preset illumination angle, so as to obtain the indirect light action intensity data of each mapping pixel point in the illumination mapping of the current illumination angle.
In an optional implementation, the step of determining the game screen in the house based on the indirect light action intensity data of the current illumination angle includes:
Determining an initial color of each screen pixel of a game picture in a house to be displayed;
acquiring target indirect light action intensity data of each screen pixel based on a target map pixel point in the illumination map of the current illumination angle corresponding to each screen pixel point;
multiplying the initial color of each screen pixel by the target indirect light action intensity data of the screen pixel to obtain the target color of each screen pixel.
In an alternative implementation, the step of multiplying the initial color of each of the screen pixels and the target indirect light action intensity data of the screen pixels to obtain the target color of each of the screen pixels includes:
determining an enhancement coefficient;
multiplying the initial color of each screen pixel, the target indirect light action intensity data of the screen pixel and the enhancement coefficient to obtain the target color of each screen pixel.
In an alternative implementation, the step of multiplying the initial color of each of the screen pixels and the target indirect light action intensity data of the screen pixels to obtain the target color of each of the screen pixels includes:
Determining a shadow coefficient based on the shadow data corresponding to the house;
multiplying the initial color of each screen pixel, the target indirect light action intensity data of the screen pixel and the shadow coefficient to obtain the target color of each screen pixel.
In an alternative implementation, a channel of the preset illumination map is used to store lighting coefficients; the step of determining a game picture in the house based on the indirect light action intensity data of the current illumination angle includes:
when the game scene is at night, acquiring the lighting coefficient and the lamplight color parameter;
and determining the game picture in the house based on the indirect light action intensity data of the current illumination angle, the lighting coefficient and the lamplight color parameter.
In a third aspect, an embodiment of the present application provides a light and shadow data processing apparatus for a game, applied to a game device, wherein the three-dimensional game scene of the game comprises a virtual object. The apparatus comprises:
the first determining module is used for determining a plurality of target scene ranges;
the generating module is used for generating a scene shadow map corresponding to the target scene range, wherein the first target scene range is smaller than the second target scene range, and the precision of the first shadow map corresponding to the first target scene range is larger than that of the second shadow map corresponding to the second target scene range;
and the second determining module is used for determining the light and shadow data of the game picture of the game scene to be displayed according to the scene shadow maps of the target scene ranges and the priority of each scene shadow map, wherein the precision of the scene shadow map is positively related to the priority.
In a fourth aspect, an embodiment of the present application provides a light and shadow data processing apparatus for a game, applied to a game device, wherein the three-dimensional game scene of the game comprises a virtual object. The apparatus comprises:
the acquisition module is used for responding to the operation of the virtual object entering the house, acquiring a current illumination angle and a plurality of preset illumination angles, wherein each preset illumination angle corresponds to preset indirect light action intensity data;
the first determining module is used for determining indirect light action intensity data of the current illumination angle according to the angle difference between the current illumination angle and each preset illumination angle and the indirect light action intensity data of each preset illumination angle;
and the second determining module is used for determining game pictures in the house based on the indirect light action intensity data of the current illumination angle.
In a fifth aspect, embodiments of the present application further provide a game device, including a memory and a processor, where the memory stores a computer program executable by the processor, and the processor executes the method according to the first aspect or the second aspect.
In a sixth aspect, embodiments of the present application further provide a computer-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to perform the method of the first or second aspect described above.
The embodiment of the application brings the following beneficial effects:
the method and the device for processing the light and shadow data of the game and the game device provided by the embodiment of the application can determine a plurality of target scene ranges; generating a scene shadow map corresponding to the target scene range; and determining the light and shadow data of the game picture of the game scene to be displayed according to the priority of each scene shadow map according to the scene shadow maps of the target scene ranges. Shadow data in the shadow data are determined through multi-level scene shadow maps, each scene shadow map corresponds to different spatial ranges in a game scene, and the range is small and the precision is high, so that finer shadow effects can be displayed in a nearer visual field range, coarser shadow effects can be displayed in a farther visual field range, and the relationship between the shadow effects and the performance can be balanced better.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a method for processing light and shadow data of a game according to an embodiment of the present application;
fig. 2 is a schematic diagram of the effect of shifting a scene range based on a position offset in a game light and shadow data processing method according to an embodiment of the present application;
fig. 3 is a schematic view of an effect of scaling based on a distance in a method for processing light and shadow data of a game according to an embodiment of the present application;
fig. 4 is a schematic view illustrating an effect of scaling based on a field angle in a method for processing light and shadow data of a game according to an embodiment of the present application;
FIG. 5 is a flowchart of another method for processing light and shadow data of a game according to an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of the light-facing side of a house;
FIG. 7 shows a schematic diagram of the backlit side of a house;
FIG. 8 shows the final effect of the light-facing side integrated into a specific house;
FIG. 9 shows the final effect of the backlit side integrated into a specific house;
FIG. 10 is a schematic view showing a structure of a light and shadow data processing apparatus for a game;
FIG. 11 is a schematic view showing a structure of a light and shadow data processing apparatus of another game;
fig. 12 is a schematic diagram showing a structure of a game device provided in an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The terms "comprising" and "having" and any variations thereof, as used in the embodiments of the present application, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Current terminal games generally use static scenes. The basic flow is as follows: a static scene is prefabricated, and the light and shadow data of the whole scene are then baked in advance in an offline baking step. The light and shadow data can indicate the light and shade produced by static objects under sunlight and can be stored in a series of Light Maps. When the game runs, the lighting effect of static objects in the scene is shown by sampling these Light Maps. Meanwhile, for dynamic objects, a small-range Shadow Map can be used to display the interaction of direct light with the dynamic objects in real time. That is, the current typical scene shadow composition is provided by a Light Map and a Shadow Map respectively: static objects obtain a high-quality pre-baked shadow effect from the offline Light Map, and dynamic objects display their shadow effect through a limited-range Shadow Map.
However, because the Light Map is baked offline for a fixed sunlight direction and is stored directly on the map without further change, the shadows do not change in real time as the sun rises and sets. Also, because the Light Map is baked offline, the shadowed places are fixed; even if an object in the scene is removed, the shadow that object cast on the ground is not removed. Because characters do not participate in scene baking, interaction between the scene and dynamic objects (e.g., characters) is lost; for example, the mottled effect of a tree's shadow projected onto a character is not achievable. Because a series of Light Maps must be stored, they directly inflate the game package as the scene grows, making the package size uncontrollable. Although dynamic objects have a dedicated Shadow Map to show the effect of direct light on the object itself, the covered range cannot be very large due to the Shadow Map resolution, and remote dynamic objects have no shadow at all, which degrades the game experience.
In order to solve the above problems, embodiments of the present application provide a method and an apparatus for processing light and shadow data of a game, and a game device, which can solve the technical problem that it is difficult to balance computing power against the shadow effect in the scene.
Embodiments of the present invention are further described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for processing light and shadow data of a game according to an embodiment of the present application. The method can be applied to game equipment, wherein the three-dimensional game scene of the game comprises virtual objects. As shown in fig. 1, the method includes:
s110, determining a plurality of target scene ranges.
The plurality of target scene ranges may be determined according to actual needs, and the sizes of the target scene ranges may differ. For example, there may be three target scene ranges whose cross sections are squares of 10 meters, 50 meters, and 150 meters. The plurality of target scene ranges may be nested within one another or may partially overlap. For example, the target scene range corresponding to 150 meters may include the 10-meter and 50-meter target scene ranges, and the 50-meter target scene range may partially overlap the 10-meter one.
The location of the target scene range may be determined based on the location of the virtual object. For example, the target scene range may be centered at the location where the virtual object is located.
S120, generating a scene shadow map corresponding to the target scene range.
The plurality of target scene ranges may include a first target scene range and a second target scene range, the first target scene range being smaller than the second target scene range, the first shadow map corresponding to the first target scene range having a greater accuracy than the second shadow map corresponding to the second target scene range.
For example, the resolution of the scene shadow map may be the same for each target scene range, e.g., 1024 × 1024. Because the same resolution is used to cover different target scene ranges, the resulting precision differs: the smaller the covered target scene range, the higher the precision.
In addition, two of the plurality of target scene ranges that are continuous in range may be referred to as a first target scene range and a second target scene range, respectively.
A pixel in the scene shadow map may be used to store depth information for the position in the target scene range corresponding to that pixel, where the depth information refers to the distance between the model and the position of the virtual sun; in other words, under illumination the scene shadow map records the occlusion relationships of objects. In a game scene the illumination is parallel light, so different illumination ranges are equivalent to several cuboids with different base areas.
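As a minimal illustrative sketch (not the claimed implementation), the occlusion test that such a depth-based scene shadow map enables can be written as follows; the function names, the bias value, and the orthographic light-space projection helper are assumptions introduced only for this example:

```python
# Illustrative sketch: depth test against a scene shadow map under parallel
# (directional) light. Names, bias, and the projection helper are assumptions.

def is_in_shadow(world_pos, light_view_proj, shadow_map, bias=0.005):
    """Return True if world_pos is occluded from the virtual sun.

    light_view_proj: maps a world position to (u, v, depth) in the light's
                     orthographic space, with u and v in [0, 1).
    shadow_map:      2D list of closest-to-sun depths for one scene range.
    """
    u, v, depth = light_view_proj(world_pos)
    h, w = len(shadow_map), len(shadow_map[0])
    x, y = min(int(u * w), w - 1), min(int(v * h), h - 1)
    # Shadowed if something closer to the sun was recorded at this texel.
    return depth - bias > shadow_map[y][x]
```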
S130, according to the scene shadow maps of the plurality of target scene ranges, determining the light and shadow data of the game picture of the game scene to be displayed according to the priority of each scene shadow map, wherein the precision of the scene shadow maps is positively related to the priority.
When determining the light and shadow data of the game picture of the game scene to be displayed, the scene shadow maps of the plurality of target scene ranges need to be sampled to obtain shadow data. Since multiple target scene ranges may include the same location, the shadow data for that location may be sampled from the most precise scene shadow map that covers it. The priority may be associated with the target scene range: the larger the target scene range, the lower the priority.
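The selection rule this implies can be sketched as follows; this is an illustrative reading, and the Cascade structure and helper names are assumptions rather than the patent's API:

```python
# Illustrative sketch: pick the highest-priority (smallest, most precise)
# scene shadow map whose target scene range covers the queried location.

from dataclasses import dataclass

@dataclass
class Cascade:
    center: tuple        # (x, z) center of the square target scene range
    half_size: float     # half the side length of the range
    priority: int        # higher priority = smaller range = finer shadows
    shadow_map: object   # the scene shadow map generated for this range

def pick_cascade(cascades, x, z):
    """Larger ranges act as fallbacks when no finer range covers (x, z)."""
    covering = [c for c in cascades
                if abs(x - c.center[0]) <= c.half_size
                and abs(z - c.center[1]) <= c.half_size]
    return max(covering, key=lambda c: c.priority) if covering else None
```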
According to the embodiment of the invention, the shadow data in the shadow data are determined through the multi-level scene shadow maps, each scene shadow map corresponds to different space ranges in the game scene, and the range is small and the precision is high, so that finer shadow effects can be displayed in a nearer visual field range, and coarser shadow effects can be displayed in a farther visual field range, and the relationship between the shadow effects and the performance can be balanced better.
For example, three target scene ranges can be set according to the distance between their edges and the virtual object, yielding three scene shadow maps: the shadow precision of the scene shadow map corresponding to the range farthest from the virtual object is set lowest, that of an intermediate range lower, and that of the range nearest to the virtual object highest.
In some embodiments, the determination of the target scene range may include a variety of ways, such as predefined or adjustable according to game data, as described in detail below in connection with specific examples.
As one example, the initial scene range may be shifted by the amount of positional shift of the virtual camera from the virtual object to obtain the target scene range. Based on this, the above step S110 may be specifically implemented by the following steps:
step a), a first position of a virtual camera in a three-dimensional game scene and a second position of a virtual object in the three-dimensional game scene are obtained;
step b), acquiring a first initial scene range based on the second position;
step c), determining the position offset of the virtual camera and the virtual object according to the first position and the second position;
and d), moving the first initial scene range according to the position offset to obtain a first target scene range.
For the above step d), as shown in fig. 2, point A is the second position of the virtual object in the three-dimensional game scene and point B is the first position of the virtual camera. The first initial scene range 211 is acquired based on the second position, and the first target scene range 212 is obtained by moving the first initial scene range 211 according to the position offset between point A and point B. Shifting the initial scene range by the position offset between the virtual camera and the virtual object makes the obtained target scene range more accurate.
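A minimal sketch of steps a) to d) follows; it assumes the fig. 2 reading in which the range is pushed along the horizontal camera-to-object direction, and the names and the scale parameter are illustrative only:

```python
# Illustrative sketch: shift the initial scene range (centered on the virtual
# object) by the horizontal offset between virtual camera and virtual object.

def offset_scene_range(camera_pos, object_pos, half_size, k=1.0):
    """camera_pos, object_pos: (x, y, z); returns (center, half_size) of the
    shifted square range. k scales how far the range is pushed (assumption)."""
    # Horizontal displacement in the direction from camera (B) toward object (A).
    dx = (object_pos[0] - camera_pos[0]) * k
    dz = (object_pos[2] - camera_pos[2]) * k
    center = (object_pos[0] + dx, object_pos[2] + dz)
    return center, half_size
```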
As another example, the initial scene range may be scaled based on the distance of the virtual camera from the virtual object to obtain the target scene range. Based on this, the above step S110 may be specifically implemented by the following steps:
step e), a first position of the virtual camera in the three-dimensional game scene and a second position of the virtual object in the three-dimensional game scene are obtained;
step f), determining the distance between the virtual camera and the virtual object according to the first position and the second position;
step g), acquiring a first initial scene range based on the second position;
and h), scaling the first initial scene range based on the distance to obtain a first target scene range.
For step h) above, the first initial scene range may be scaled based on the distance between the virtual camera and the virtual object.
For example, as shown in fig. 3, point C is the second position of the virtual object in the three-dimensional game scene, and point D is the first position of the virtual camera. The preset scene range 311 is determined from point C and the size of the preset scene range. Based on a preset correspondence between distance and scaling, the scaling factor is 1 when the virtual camera is located at point E, i.e., no scaling is performed, and point E corresponds to the preset scene range 311. Since the distance between point E and point C is greater than the distance between point D and point C, the scaling factor corresponding to point D is smaller than 1, i.e., scaling is performed; scaling the preset scene range 311 by the factor corresponding to point D yields the initial scene range 312 of the first target scene range.
By scaling the initial scene range based on the distance between the virtual camera and the virtual object, the scaling degree can be made more accurate to reach a more accurate target scene range.
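A minimal sketch of steps e) to h) under these assumptions (the reference distance plays the role of point E in fig. 3, and the linear mapping and clamping are illustrative choices, not prescribed by the text):

```python
# Illustrative sketch: scale the initial scene range by a factor derived from
# the camera-object distance; factor 1 at the reference distance, <1 nearer.

import math

def scale_range_by_distance(camera_pos, object_pos, half_size, ref_distance=30.0):
    d = math.dist(camera_pos, object_pos)
    scale = min(d / ref_distance, 1.0)  # no scaling at or beyond the reference
    return half_size * scale
```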
As another example, the initial scene range may also be scaled based on the current camera field angle of the camera to obtain the target scene range. Based on this, the above step S110 may be specifically implemented by the following steps:
step i), a first position of a virtual camera in a three-dimensional game scene and a second position of a virtual object in the three-dimensional game scene are obtained;
step j), obtaining the field angle of the virtual camera;
step k), acquiring a first initial scene range based on the second position;
and step l), scaling the first initial scene range based on the field angle to obtain a first target scene range.
For step l) above, the first initial scene range may be scaled based on the camera's field angle at the current moment.
For example, as shown in fig. 4, point F is the second position of the virtual object in the three-dimensional game scene, point G is the first position of the virtual camera, and the preset scene range 411 is determined from point F and the size of the preset scene range. Based on the correspondence between field angle and scaling, the scaling factor is determined to be 1 when the field angle of the virtual camera is α, with the preset scene range 411 corresponding to field angle α. Since field angle α is larger than field angle β, the scaling factor corresponding to β is smaller than 1; scaling the preset scene range 411 by the factor corresponding to β yields the initial scene range 412 of the first target scene range.
By scaling the first initial scene range based on the camera field angle of the camera at the current time, the scaling degree can be more accurate to achieve a more accurate target scene range.
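A minimal sketch of steps i) to l); the reference field angle (playing the role of α in fig. 4) and the linear mapping are assumptions, since the text only requires some preset correspondence between field angle and scaling:

```python
# Illustrative sketch: scale the initial scene range by the current camera
# field of view; factor 1 at the reference FOV, smaller for narrower FOVs.

def scale_range_by_fov(fov_deg, half_size, ref_fov_deg=60.0):
    scale = min(fov_deg / ref_fov_deg, 1.0)
    return half_size * scale
```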
In addition, the above-described offsets and scaling may also be combined to enable determination of a target scene range, as described in detail below in connection with specific examples.
As one example, the initial scene range may be shifted according to the view direction of the camera to obtain the target scene range. Based on this, the above step S110 may be specifically implemented by the following steps:
step 1.1), a first position of a virtual camera in a three-dimensional game scene and a second position of a virtual object in the three-dimensional game scene are obtained;
step 1.2), acquiring an initial scene range of the first target scene range based on the second position;
step 1.3), determining the position offset of the virtual camera and the virtual object according to the first position and the second position;
step 1.4), the initial scene range of the first target scene range is moved according to the position offset to obtain the first target scene range.
For the step 1.1), the first position and the second position may be positions of the virtual camera and the virtual object at a current time, where the current time may be a time corresponding to a game screen of the game scene to be displayed.
For the step 1.2), the initial scene range may be a preset scene range, or may be obtained by adjusting the preset scene range based on game parameters.
For the above step 1.3), the positional offset may be an offset of the virtual camera and the virtual object in a horizontal direction. Since the virtual object is movable in the three-dimensional game scene, the virtual camera will follow the virtual object. The position between the virtual camera and the virtual object may be relatively unchanged without the user adjusting the field of view during the movement. When the user adjusts the field of view, the offset will also change accordingly.
For the above step 1.4), as shown in fig. 2, the point a is the second position where the virtual object is located, the point B is the first position where the virtual camera is located, the initial scene range 211 is determined according to the size of the point a and the preset initial scene range, and the first target scene range 212 is obtained by moving the initial scene range 211 according to the offset between the point a and the point B and according to the direction of the point B toward the point a. The direction of this point B towards point a may be the direction of the virtual camera towards the virtual object in the horizontal plane.
In the embodiment of the invention, only the first target scene range may be shifted, or both the first target scene range and the second target scene range may be shifted.
In some embodiments, steps 1.1) -1.4) above may be for the smallest one or more of the plurality of target scene ranges. For example, the plurality of target scene ranges may include three of "large", "medium", and "small", where the offset may be implemented for "medium" and "small" by steps 1.1) -1.4) above, and the offset is not performed for "large". Alternatively, for "small" the offset can be achieved by steps 1.1) -1.4) above, with "large" and "medium" not offset.
As another example, the preset scene may be scaled according to how far the virtual camera is from the virtual object to obtain the initial scene range. Based on this, the above step 1.2) can be achieved specifically by the following steps:
step 2.1), determining the distance between the virtual camera and the virtual object according to the first position and the second position;
step 2.2), acquiring a preset scene range of the first target scene range based on the second position;
step 2.3), scaling the preset scene range of the first target scene range based on the distance to obtain an initial scene range of the first target scene range.
For step 2.1 above), the distance between the virtual camera and the virtual object may be the distance between the first location and the second location in the three-dimensional game scene.
For the above step 2.2), the size of the preset scene range of the first target scene range may be preset, and then the position of the preset scene range may be determined according to the second position.
For the above step 2.3), the correspondence between the distance and the scaling may be preset. The scaling may be determined based on the correspondence and the distance of the first location and the second location.
For example, as shown in fig. 3, point C is the second position of the virtual object, point D is the first position of the virtual camera, and the preset scene range 311 is determined from point C and the size of the preset scene range. Based on the set correspondence between distance and scaling, the scaling factor is 1 when the virtual camera is located at point E, which corresponds to the preset scene range 311. Since the distance between point E and point C is greater than the distance between point D and point C, the scaling factor corresponding to point D is smaller than 1; scaling the preset scene range 311 by the factor corresponding to point D yields the initial scene range 312 of the first target scene range.
As another example, the preset scene may be scaled according to the field angle of the virtual camera to obtain the initial scene range. Based on this, the above step 1.2) can be achieved specifically by the following steps:
step 3.1), obtaining the field angle of a virtual camera in the three-dimensional game scene;
step 3.2), acquiring a preset scene range of the first target scene range based on the second position;
step 3.3), scaling the preset scene range of the first target scene range based on the current camera view field angle of the camera to obtain an initial scene range of the first target scene range.
For step 3.1 above), the field angle of the virtual camera may be FOV, which may be default or user-configurable.
For the above step 3.2), the size of the preset scene range of the first target scene range may be preset, and then the position of the preset scene range may be determined according to the second position.
For the above step 3.3), the correspondence relationship between the angle of view and the scaling may be preset. The scaling may be determined based on the correspondence, as well as the field angle of the virtual camera.
For example, as shown in fig. 4, point F is the second position of the virtual object, point G is the first position of the virtual camera, and the preset scene range 411 is determined from point F and the size of the preset scene range. Based on the correspondence between field angle and scaling, the scaling factor is determined to be 1 when the field angle of the virtual camera is α, with the preset scene range 411 corresponding to field angle α. Since field angle α is larger than field angle β, the scaling factor corresponding to β is smaller than 1; scaling the preset scene range 411 by the factor corresponding to β yields the initial scene range 412 of the first target scene range.
In the embodiment of the invention, only the first target scene range can be scaled, or both the first target scene range and the second target scene range can be scaled.
In addition, in some embodiments, the zooming action may be performed after the shifting, where the step 1.4) may specifically include:
step 4.1), moving the initial scene range of the first target scene range according to the position offset to obtain an intermediate scene range of the first target scene range;
step 4.2), determining a scaling factor, and scaling the intermediate scene range of the first target scene range based on the scaling factor to obtain the first target scene range.
For step 4.2 above), the scaling factor may be determined based on the camera field angle or the distance of the virtual camera from the virtual object.
In some embodiments, steps 2.1) -2.3), 3.1) -3.3), or 4.1) -4.2) above may be for a smallest one or more of a plurality of target scene ranges. For example, the plurality of target scene ranges may include three of "large", "medium", and "small", where scaling may be achieved by steps 2.1) -2.3), 3.1) -3.3), or 4.1) -4.2) described above for "medium" and "small", with "large" not scaling; alternatively, scaling may be achieved for "small" by steps 2.1) -2.3), steps 3.1) -3.3), or steps 4.1) -4.2) described above, with "large" and "medium" not scaling.
Wherein the offset and scaling may be for different scene ranges. For example, the plurality of target scene ranges may include three of "large", "medium", and "small", with offsets being made for "medium" and "small", and no offset being made for "large"; also, scaling is performed for "small" and not for "large" and "medium".
According to the embodiment of the invention, the target scene range is determined through operations such as offset or scaling, so that the target scene range can adapt to actual requirements, and the shadow effect of the generated optimized picture is improved.
In some embodiments, the frequency of scene shadow map generation may be controlled by different update frequencies for a target scene range, thereby enabling lazy refreshing of a larger target scene range. Based on this, the above step S120 may be specifically implemented by the following steps:
step 5.1), determining scene shadow maps of a plurality of target scene ranges based on a preset update frequency and the second position of the virtual object in the three-dimensional game scene, wherein a larger range among the plurality of target scene ranges corresponds to a lower preset update frequency.
The game device needs to generate each frame image at a certain frame rate when rendering the game screen; for example, the frame rate may be 60 frames per second or 100 frames per second. Step S130 may be performed for every frame image, but whether the above step S110 or S120 needs to be performed can be decided according to the preset update frequency. For example, the plurality of target scene ranges may include "large", "medium", and "small", corresponding to "no update", "slow", and "fast" update frequencies respectively. Here "large" may correspond to the full scene range and need not be updated if the scene is unchanged. If the frame rate is 100 frames per second, "slow" corresponds to 10 seconds and "fast" to 1 second; the scene shadow map corresponding to "medium" may then be generated once every 1000 frames, and the one corresponding to "small" once every 100 frames.
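Using the example numbers above, the scheduling can be sketched as follows; the dictionary layout and function names are illustrative assumptions:

```python
# Illustrative sketch: per-range update scheduling at 100 fps, with "fast" =
# 1 s (every 100 frames), "slow" = 10 s (every 1000 frames), "large" never.

UPDATE_PERIOD_FRAMES = {"small": 100, "medium": 1000, "large": None}

def ranges_to_refresh(frame_index):
    """Return which scene shadow maps should be regenerated this frame."""
    return [name for name, period in UPDATE_PERIOD_FRAMES.items()
            if period is not None and frame_index % period == 0]

# e.g. frame 1000 refreshes both "small" (every 100 frames) and "medium".
```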
In some embodiments, scene shading map generation may be controlled by some specific conditions and update frequency for a target scene range. Based on this, the above step 5.1) can be achieved specifically by the following steps:
step 6.1), determining a scene shadow map of the first target scene range and/or the second target scene range based on the second location when the update frequency of the first target scene range and/or the second target scene range is satisfied;
step 6.2), when the update frequency of the second target scene range is not satisfied, reusing the previously generated scene shadow map of the second target scene range;
step 6.3), when the preset updating frequency of the first target scene range is not met, and when the moving range of the virtual object meets the preset moving range or the sunlight direction in the game scene to be displayed meets the preset direction range, determining a scene shadow map of the first target scene range based on the second position;
step 6.4), when the preset update frequency of the first target scene range is not met, and the moving range of the virtual object does not meet the preset moving range and the sunlight direction in the game scene to be displayed does not meet the preset direction range, reusing the previously generated scene shadow map of the first target scene range.
For the above step 6.2), when the update condition of the second target scene range is not satisfied, the light and shadow data corresponding to the current frame game screen may be determined based on the generated scene shadow map of the second target scene range.
For the above step 6.4), when the update condition of the first target scene range is not satisfied, the light and shadow data corresponding to the current frame game screen may be determined based on the generated scene shadow map of the first target scene range.
By adopting hierarchical scene shadow maps with different update frequencies for different maps, the refresh of larger target scene ranges is deferred, which can greatly reduce the cost of generating the scene shadow maps.
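A minimal sketch of the refresh decision in steps 6.1) to 6.4) for the first (small) range; the thresholds and parameter names are assumptions used only to make the conditions concrete:

```python
# Illustrative sketch: refresh the first target scene range's shadow map when
# its update period has elapsed, or early if the virtual object moved beyond a
# preset range or the sun direction left a preset angular range.

def should_refresh_first_range(period_elapsed, moved_distance, sun_angle_delta,
                               move_threshold=2.0, sun_threshold_deg=1.0):
    if period_elapsed:
        return True  # step 6.1): the update frequency is satisfied
    # steps 6.3)/6.4): early refresh only on significant movement or sun change
    return moved_distance > move_threshold or sun_angle_delta > sun_threshold_deg
```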
In some embodiments, generation of game visuals may include a variety of implementations. As an example, the above step S130 may be specifically implemented by the following steps:
step 7.1), generating a screen shadow map according to the scene shadow map and the priority of each scene shadow map;
step 7.2) determining light and shadow data of a game picture of the game scene to be displayed based on the screen shadow map so as to display the game picture of the game scene to be displayed in a screen of the game device based on the light and shadow data.
For the above step 7.1), the screen of the game device corresponds to a screen space in the game scene, which in turn corresponds to a scene range; the game picture corresponding to this scene range is what is to be displayed. The scene range of this screen space may be greater or smaller than the minimum target scene range.
To reduce the cost of each object sampling the scene shadow maps during pixel rendering, after each level of scene shadow map is obtained, a screen-space shadow map (Screen Space Shadow Map) can be generated, which can be sampled directly to determine the light and shadow data. During the generation of this screen-space shadow map, antialiasing operations may be performed because of the precision limitations of the hierarchical shadow maps.
To reduce jaggies and accelerate computation as much as possible, after the depth of the current scene pixel in the direct light space is obtained, the hierarchical shadow maps are sampled at four points around the current pixel to obtain the pixel's shadow values in the several hierarchical shadow maps, so as to judge whether the current pixel is in shadow; the shadow value can be represented by an "in shadow" value. Next, whether the pixel lies on a shadow edge is determined by checking whether the absolute value of the shadow value minus 0.5 is less than 0.5, and pixels on edges are antialiased by fast approximate antialiasing (FXAA).
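The four-tap sampling and the edge test can be sketched as follows; this is an illustrative reading, with the sampling helper and texel offset assumed:

```python
# Illustrative sketch: average four surrounding taps of the hierarchical shadow
# map into a screen-space shadow value and flag shadow-edge pixels, using the
# |value - 0.5| < 0.5 test from the text.

def screen_shadow_value(sample_fn, u, v, texel):
    """sample_fn(u, v) -> 1.0 if that tap is in shadow, else 0.0."""
    taps = [sample_fn(u - texel, v), sample_fn(u + texel, v),
            sample_fn(u, v - texel), sample_fn(u, v + texel)]
    value = sum(taps) / 4.0
    # Strictly between 0 and 1 means mixed taps: the pixel sits on a shadow
    # edge and is a candidate for FXAA-style smoothing.
    is_edge = abs(value - 0.5) < 0.5
    return value, is_edge
```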
Fig. 5 is a flowchart of a method for processing light and shadow data of a game according to an embodiment of the present application. The method can be applied to game equipment, wherein the three-dimensional game scene of the game comprises virtual objects. As shown in fig. 5, the method includes:
s510, responding to the operation of the virtual object entering the house, and acquiring a current illumination angle and a plurality of preset illumination angles, wherein each preset illumination angle corresponds to preset indirect light action intensity data;
the indirect light action intensity data includes a ratio of an action intensity of ambient light to an action intensity of sunlight.
S520, determining indirect light action intensity data of the current illumination angle according to the angle difference between the current illumination angle and each preset illumination angle and the indirect light action intensity data of each preset illumination angle;
s530, determining a game picture in the house based on the indirect light action intensity data of the current illumination angle.
In some embodiments, the number of the preset illumination angles may be three, where the three preset illumination angles correspond to one preset illumination map, and three channels of the preset illumination map are respectively used to store indirect light action intensity data corresponding to one preset illumination angle; the step S520 may be specifically implemented by the following steps:
Step 8.1), determining an illumination map of the current illumination angle according to the angle difference between the current illumination angle and each preset illumination angle and the preset illumination map.
The illumination map of the current illumination angle is used for storing indirect light action intensity data of the current illumination angle.
In some embodiments, the step 8.1) specifically includes:
step 9.1), determining the weight corresponding to each preset illumination angle according to the angle difference between the current illumination angle and each preset illumination angle;
step 9.2), based on the weight corresponding to each preset illumination angle, weighting and summing the indirect light action intensity data of the three preset illumination angles included in each pixel point of the preset illumination map to obtain the indirect light action intensity data of each pixel point of the current illumination angle.
For example, for large objects such as houses, three illumination maps (Light Maps) at preset illumination angles can additionally be baked offline and combined into a single house map (Home Map), whose three channels store the brightness information for the three angles. The weight corresponding to each preset illumination angle is determined by taking the difference between the current illumination angle and each of the three sunlight directions; the illumination map for the current illumination angle is then obtained, so that the indirect light effect of houses is improved through the Home Map.
For example, when baking a house's illumination maps for three preset illumination angles, the specific baking process includes: arranging three suns in the baking tool around the house being baked, for example with sun directions (0.7071068, 0.5773503, 0.4082483), (0, 0.5773503, -0.8164966), and (-0.7071068, 0.5773503, 0.4082483) respectively. Baking then yields the indirect light action intensity data of the house for the different sun positions.
Next, at the vertex shading stage of house rendering, the three preset illumination angles used for baking are transformed into the game scene. Specifically, the weight data for the three RGB channels corresponding to the preset illumination angles are calculated and applied in the computation of ambient light and sunlight, finally giving the indirect illumination effect of the house at the current illumination angle. For example, FIG. 6 shows a schematic diagram of the light-facing side of a house; FIG. 7 shows the backlit side; FIG. 8 shows the final effect of the light-facing side integrated into a specific house; and FIG. 9 shows the final effect of the backlit side integrated into a specific house.
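The per-texel blend of the three Home Map channels can be sketched as follows; the dot-product weighting curve and normalization are assumptions, since the text only requires weights derived from the angle differences:

```python
# Illustrative sketch: blend the three per-angle channels of one Home Map texel
# into the indirect light action intensity for the current sun direction.

def blend_home_map(current_dir, preset_dirs, rgb):
    """current_dir, preset_dirs: unit sun-direction vectors; rgb: the three
    channel values of one Home Map texel (one per preset illumination angle)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    # Smaller angle difference (larger dot product) -> larger weight.
    weights = [max(dot(current_dir, d), 0.0) for d in preset_dirs]
    total = sum(weights) or 1.0
    return sum(w * c for w, c in zip(weights, rgb)) / total
```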
The Home Map may be 512 to 2048 pixels in size depending on the house size, and the corresponding performance consumption is within an acceptable range.
In some embodiments, step S530 may be specifically implemented by the following steps:
step 10.1), determining an initial color of each screen pixel of a game picture in a house to be displayed;
step 10.2), acquiring target indirect light action intensity data of each screen pixel based on target map pixel points in the illumination map of the current illumination angle corresponding to each screen pixel;
step 10.3), multiplying the initial color of each screen pixel by the target indirect light action intensity data of the screen pixel to obtain the target color of each screen pixel.
In some embodiments, to enhance the indoor lighting effect, after the player enters a room, the overall brightness of the indoor scene is raised and the indirect lighting contributed to the room by ambient light and sunlight is adjusted. The specific process is as follows: raise the brightness of the whole picture, i.e., multiply the finally output color by an intensity coefficient (Color Scale); and raise the indirect illumination of ambient light and sunlight, i.e., multiply the ambient-light indirect illumination and the sunlight indirect illumination solved from the Home Map by respective indirect illumination intensity coefficients. As an example, the above step 10.3) may specifically include:
step 11.1), determining enhancement coefficients;
Step 11.2), multiplying the initial color of each screen pixel, the target indirect light action intensity data of the screen pixel and the enhancement coefficient to obtain the target color of each screen pixel.
In some embodiments, for a house in shadow, shading caused by weather changes may affect the shadow coefficient. Whether the house is in shadow may be determined based on the shadow data corresponding to the house; if so, the shadow coefficient is determined based on the current weather, and the target color may then be adjusted based on the shadow coefficient. The shadow coefficient is used to adjust brightness; for example, the shadow coefficient on a cloudy day is larger than on a sunny day. As an example, the above step 11.2) may specifically include:
step 12.1), determining a shadow coefficient based on the shadow data corresponding to the house;
step 12.2), multiplying the initial color of each screen pixel, the target indirect light action intensity data of the screen pixel and the shadow coefficient to obtain the target color of each screen pixel.
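Steps 11.1) to 12.2) simply add further multiplicative factors to the same per-pixel product. A sketch follows, with invented coefficient values purely for illustration; the description above only requires that the cloudy-day shadow coefficient exceed the sunny-day one:

```python
def final_color(initial_color, indirect_intensity, indoors, weather):
    # Step 11.1): enhancement coefficient (Color Scale) once the player
    # is inside a house; 1.3 is an illustrative value.
    enhancement = 1.3 if indoors else 1.0
    # Step 12.1): shadow coefficient from the current weather; a cloudy
    # day uses a larger coefficient than a sunny day.
    shadow_coeff = {"sunny": 0.8, "cloudy": 1.0}.get(weather, 1.0)
    # Steps 11.2)/12.2): all factors combine as per-pixel multiplies.
    return initial_color * indirect_intensity * enhancement * shadow_coeff
```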
Cloud shadows moving across the scene on a sunny day change the shape of shadows, and the environment parameters of shadows change correspondingly under different weather, such as sunny or rainy days. Because weather changes span different time periods and different weather effects, the applied shadows produce subtle brightness variations, and these variations in turn cause brightness changes inside houses.
In some embodiments, in the preset illumination map, the alpha channel (in addition to the red, green and blue channels) can be used to store house lighting coefficients; by superimposing these on the self-luminous map of the house, the effect of a house lit at night is produced. A luminous object may have a self-luminous map, with the alpha channel of that map marking the bright-light portion. Specifically, for objects in the scene that light up, such as lamps, the lighting coefficient parameter is stored in the alpha channel of the self-luminous map to control the lighting effect. For the night lighting effect of a house, to simulate the real-world visual effect of houses lit at night, the spare alpha channel of the Home Map can be used to store the house's lighting coefficient. As one example, one channel of the preset illumination map is used to store the lighting coefficients; the step S530 may specifically include:
step 13.1), when the game scene is at night, acquiring a lighting coefficient and a lighting color parameter;
step 13.2), determining a game picture in the house based on the indirect light action intensity data of the current illumination angle, the lighting coefficient and the lamplight color parameter.
In a dark environment, the lighting coefficient sampled for an object can be multiplied by the light color parameter and applied to the corresponding object pixel. In addition, for objects in the scene that light up, such as lamps, the lighting coefficient parameter is stored in the alpha channel of the self-luminous map to control the lighting effect. The lighting coefficient here describes which parts of an object can be lit; for a desk lamp, for example, the tube itself emits light and the brightness falls off gradually around the tube. When the lamp is lit, another parameter is the light color parameter: this color can be set dynamically through a script and given different values according to the scene, for example to show the lamp in a warm or pink tone, and the brightness intensity can also be increased dynamically.
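A sketch of steps 13.1)/13.2), continuing the NumPy sketches above and following the channel layout just described (lighting coefficient read from the spare alpha channel of the Home Map, light color set from script); the additive superposition onto the shaded color is an assumption:

```python
def night_lighting(base_color, lighting_coeff, light_color, is_night):
    """base_color: (N, 3) shaded color per pixel; lighting_coeff: (N,)
    alpha-channel lighting coefficients; light_color: (3,) script-driven
    RGB parameter. All arguments are NumPy arrays."""
    if not is_night:
        return base_color
    # Step 13.2): superimpose coefficient * light color as self-illumination.
    return base_color + lighting_coeff[:, None] * light_color
```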
In practical applications, the game device in the embodiments of the present application may be any device that runs a game, for example, a server running the game, or a terminal operated by a player. When the game device is a server running the game, the server can process the picture to obtain the display picture and then send it to the terminal, so that the terminal can present it to the player through its display; when the game device is a player-operated terminal, the terminal can directly process the picture to obtain the display picture and present it through its display.
Fig. 10 provides a schematic structural diagram of a light and shadow data processing apparatus for a game. The apparatus is applied to a game device, the three-dimensional game scene of the game comprises a virtual object, and the apparatus comprises:
a first determining module 1001, configured to determine a plurality of target scene ranges;
a generating module 1002, configured to generate a scene shadow map corresponding to the target scene range, where a first target scene range is smaller than a second target scene range, and an accuracy of the first shadow map corresponding to the first target scene range is greater than an accuracy of the second shadow map corresponding to the second target scene range;
a second determining module 1003, configured to determine, according to the scene shadow maps of the multiple target scene ranges and according to the priority of each scene shadow map, light and shadow data of a game picture of a game scene to be displayed, where the precision of a scene shadow map is positively related to its priority.
In some embodiments, the first determining module 1001 is specifically configured to:
acquiring a first position of a virtual camera in the three-dimensional game scene and a second position of the virtual object in the three-dimensional game scene;
acquiring a first initial scene range based on the second position;
Determining the position offset of the virtual camera and the virtual object according to the first position and the second position;
and moving the first initial scene range according to the position offset to obtain the first target scene range.
In some embodiments, the first determining module 1001 is specifically configured to:
acquiring a first position of a virtual camera in the three-dimensional game scene and a second position of the virtual object in the three-dimensional game scene;
determining a distance between the virtual camera and the virtual object according to the first position and the second position;
acquiring a first initial scene range based on the second position;
and scaling the first initial scene range based on the distance to obtain the first target scene range.
In some embodiments, the first determining module 1001 is specifically configured to:
acquiring a first position of a virtual camera in the three-dimensional game scene and a second position of the virtual object in the three-dimensional game scene;
acquiring the field angle of a virtual camera;
acquiring a first initial scene range based on the second position;
and scaling the first initial scene range based on the field angle of the virtual camera to obtain the first target scene range.
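The three variants of the first determining module share one pattern: take an initial range anchored at the virtual object, then move it by the camera offset, scale it with the camera distance, or scale it with the field angle. The patent presents these as alternative embodiments; the sketch below applies each in turn only for compactness, and the specific offset and scaling laws are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SceneRange:
    center_x: float
    center_z: float
    half_size: float

def first_target_range(obj_pos, cam_pos, base_half_size, fov_deg):
    # First initial scene range acquired from the virtual object's position.
    r = SceneRange(obj_pos[0], obj_pos[2], base_half_size)
    # Variant 1: move the range by the camera/object position offset.
    r.center_x += (cam_pos[0] - obj_pos[0]) * 0.5
    r.center_z += (cam_pos[2] - obj_pos[2]) * 0.5
    # Variant 2: scale with the camera/object distance (illustrative law).
    dist = sum((c - o) ** 2 for c, o in zip(cam_pos, obj_pos)) ** 0.5
    r.half_size *= max(1.0, dist / 10.0)
    # Variant 3: scale with the virtual camera's field angle.
    r.half_size *= fov_deg / 60.0
    return r
```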
In some embodiments, the generating module 1002 is specifically configured to:
determining scene shadow maps of a plurality of target scene ranges based on a preset update frequency and a second position of the virtual object in the three-dimensional game scene, where a larger range among the target scene ranges corresponds to a lower preset update frequency.
In some embodiments, the generating module 1002 is specifically configured to:
determining a scene shadow map of the first target scene range and/or the second target scene range based on the second position when the update frequency of the first target scene range and/or the second target scene range is satisfied;
when the update frequency of the second target scene range is not satisfied, reusing the previously generated scene shadow map of the second target scene range;
when the preset update frequency of the first target scene range is not met, but the moving range of the virtual object meets the preset moving range or the sunlight direction in the game scene to be displayed meets the preset direction range, determining a scene shadow map of the first target scene range based on the second position;
and when the preset update frequency of the first target scene range is not met, the moving range of the virtual object does not meet the preset moving range, and the sunlight direction in the game scene to be displayed does not meet the preset direction range, reusing the previously generated scene shadow map of the first target scene range.
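The update policy of the generating module amounts to: regenerate a cascade on its own schedule, but regenerate the near, high-precision range early when the virtual object has moved far enough or the sun direction has changed enough. A sketch, with hypothetical field names:

```python
def should_regenerate(cascade, frame, moved_beyond_range, sun_dir_changed):
    """cascade.period: frames between scheduled updates (a larger target
    scene range uses a longer period); cascade.is_near: True for the
    first (small, high-precision) target scene range."""
    if frame % cascade.period == 0:
        return True   # scheduled regeneration for this range
    if cascade.is_near and (moved_beyond_range or sun_dir_changed):
        return True   # forced early regeneration of the near range
    return False      # reuse the previously generated scene shadow map
```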
In some embodiments, the second determining module 1003 is specifically configured to:
generating a screen shadow map from the scene shadow maps according to the priority of each scene shadow map;
determining light and shadow data of a game picture of a game scene to be displayed based on the screen shadow map, so that the game picture of the game scene to be displayed is displayed on the screen of the game device based on the light and shadow data.
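The priority rule of the second determining module can be read as: where target scene ranges overlap, sample from the highest-precision scene shadow map that contains the point. A sketch with hypothetical `contains`/`sample` helpers:

```python
def sample_screen_shadow(cascades, world_pos):
    """cascades: scene shadow maps sorted by descending precision
    (i.e., ascending target scene range size)."""
    for cascade in cascades:
        if cascade.contains(world_pos):      # highest precision wins
            return cascade.sample(world_pos)
    return 1.0  # fully lit outside every target scene range
```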
The light and shadow data processing device for the game provided by the embodiment of the application has the same technical characteristics as the light and shadow data processing method for the game provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
Fig. 11 provides a schematic structural diagram of another light and shadow data processing apparatus for a game. The apparatus is applied to a game device, the three-dimensional game scene of the game comprises a virtual object, and the apparatus comprises:
an obtaining module 1101, configured to obtain a current illumination angle and a plurality of preset illumination angles in response to an operation of the virtual object entering the house, where each preset illumination angle corresponds to preset indirect light action intensity data;
a first determining module 1102, configured to determine indirect light action intensity data of the current illumination angle according to an angle difference between the current illumination angle and each preset illumination angle, and indirect light action intensity data of each preset illumination angle;
A second determining module 1103 is configured to determine a game screen in the house based on the indirect light action intensity data of the current illumination angle.
In some embodiments, the number of preset illumination angles is three, the three preset illumination angles correspond to one preset illumination map, and each of the three channels of the preset illumination map stores the indirect light action intensity data of one preset illumination angle; the first determining module 1102 is specifically configured to:
and determining an illumination map of the current illumination angle according to the angle difference between the current illumination angle and each preset illumination angle and the preset illumination map, wherein the illumination map of the current illumination angle is used for storing indirect light action intensity data of the current illumination angle.
In some embodiments, the indirect light action intensity data comprises a ratio of an action intensity of ambient light to solar light.
In some embodiments, the first determining module 1102 is specifically configured to:
determining weights corresponding to each preset illumination angle according to the angle difference between the current illumination angle and each preset illumination angle;
and carrying out weighted summation on the indirect light action intensity data of the three preset illumination angles included in each mapping pixel point in the preset illumination mapping based on the weight corresponding to each preset illumination angle, so as to obtain the indirect light action intensity data of each mapping pixel point in the illumination mapping of the current illumination angle.
In some embodiments, the second determining module 1103 is specifically configured to:
determining an initial color of each screen pixel of a game picture in a house to be displayed;
acquiring target indirect light action intensity data of each screen pixel based on the target map pixel point corresponding to that screen pixel in the illumination map of the current illumination angle;
multiplying the initial color of each screen pixel by the target indirect light action intensity data of the screen pixel to obtain the target color of each screen pixel.
In some embodiments, the second determining module 1103 is specifically configured to:
determining an enhancement coefficient;
multiplying the initial color of each screen pixel, the target indirect light action intensity data of the screen pixel and the enhancement coefficient to obtain the target color of each screen pixel.
In some embodiments, the second determining module 1103 is specifically configured to:
determining a shadow coefficient based on the shadow data corresponding to the house;
multiplying the initial color of each screen pixel, the target indirect light action intensity data of the screen pixel and the shadow coefficient to obtain the target color of each screen pixel.
In some embodiments, one channel of the preset illumination map is used to store a lighting coefficient; the second determining module 1103 is specifically configured to:
when the game scene is at night, acquiring the lighting coefficient and the lamplight color parameter;
and determining the game picture in the house based on the indirect light action intensity data of the current illumination angle, the lighting coefficient and the lamplight color parameter.
The light and shadow data processing device for the game provided by the embodiment of the application has the same technical characteristics as the light and shadow data processing method for the game provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
As shown in fig. 12, a game device 1200 provided in an embodiment of the present application includes: a processor 1201, a memory 1202 and a bus, the memory 1202 storing machine-readable instructions executable by the processor 1201. When the game device is running, the processor 1201 communicates with the memory 1202 over the bus, and the processor 1201 executes the machine-readable instructions to perform the steps of the light and shadow data processing method of a game described above.
In particular, the memory 1202 and the processor 1201 can be general-purpose memories and processors, and are not particularly limited herein, and the light and shadow data processing method of the game described above can be executed when the processor 1201 runs a computer program stored in the memory 1202.
In practice, the processor 1201 may be an integrated circuit chip with signal processing capability, and the steps of the above method may be completed by integrated logic circuits of hardware in the processor 1201 or by instructions in the form of software. The processor 1201 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processing, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or register. The storage medium is located in the memory 1202; the processor 1201 reads the information in the memory 1202 and, in combination with its hardware, completes the steps of the above method.
Corresponding to the above-mentioned light and shadow data processing method of the game, the embodiment of the application further provides a computer readable storage medium storing machine executable instructions, which when invoked and executed by a processor, cause the processor to execute the steps of the above-mentioned light and shadow data processing method of the game shown in fig. 1 or 5.
The light and shadow data processing apparatus of the game provided by the embodiments of the present application may be specific hardware on a device, or software or firmware installed on a device. The apparatus provided in the embodiments of the present application has the same implementation principle and technical effects as the foregoing method embodiments; for brevity, where the apparatus embodiments do not mention a point, reference may be made to the corresponding content in the foregoing method embodiments. It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, apparatus and units described above may refer to the corresponding processes in the above method embodiments, and are not described again here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments provided in the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the light and shadow data processing method described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
It should be noted that: like reference numerals and letters in the following figures denote like items, and thus once an item is defined in one figure, no further definition or explanation of it is required in the following figures, and furthermore, the terms "first," "second," "third," etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present application, intended to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some of the technical features within the technical scope disclosed herein; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be encompassed within the protection scope of this application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A light and shadow data processing method for a game, wherein the method is applied to a game device, and a three-dimensional game scene of the game comprises a virtual object, and the method comprises the following steps:
determining a plurality of target scene ranges, wherein the positions of the target scene ranges are determined according to the positions of the virtual objects;
Generating a scene shadow map corresponding to the target scene range, wherein a first target scene range is smaller than a second target scene range, and the accuracy of the first shadow map corresponding to the first target scene range is greater than the accuracy of the second shadow map corresponding to the second target scene range;
determining light and shadow data of a game picture of a game scene to be displayed according to the scene shadow maps of the target scene ranges and the priority of each scene shadow map, wherein the precision of a scene shadow map is positively related to its priority; when determining the light and shadow data, sampling is carried out in the scene shadow maps of the plurality of target scene ranges to obtain shadow data, and where the plurality of target scene ranges cover the same position, the shadow data at that position is sampled from the scene shadow map with the highest precision.
2. The method of claim 1, wherein the step of determining a plurality of target scene ranges comprises:
acquiring a first position of a virtual camera in the three-dimensional game scene and a second position of the virtual object in the three-dimensional game scene;
Acquiring a first initial scene range based on the second position;
determining the position offset of the virtual camera and the virtual object according to the first position and the second position;
and moving the first initial scene range according to the position offset to obtain the first target scene range.
3. The method of claim 1, wherein the step of determining a plurality of target scene ranges comprises:
acquiring a first position of a virtual camera in the three-dimensional game scene and a second position of the virtual object in the three-dimensional game scene;
determining a distance between the virtual camera and the virtual object according to the first position and the second position;
acquiring a first initial scene range based on the second position;
and scaling the first initial scene range based on the distance to obtain the first target scene range.
4. The method of claim 1, wherein the step of determining a plurality of target scene ranges comprises:
acquiring a first position of a virtual camera in the three-dimensional game scene and a second position of the virtual object in the three-dimensional game scene;
Acquiring the field angle of a virtual camera;
acquiring a first initial scene range based on the second position;
and scaling the first initial scene range based on the field angle of the virtual camera to obtain the first target scene range.
5. The method of claim 1, wherein the step of generating a scene shadow map corresponding to the target scene range comprises:
determining scene shadow maps of a plurality of target scene ranges based on a preset update frequency and a second position of the virtual object in the three-dimensional game scene, where a larger range among the target scene ranges corresponds to a lower preset update frequency.
6. The method of claim 5, wherein determining a scene shading map for a plurality of target scene ranges based on a preset update frequency and a second position of the virtual object in the three-dimensional game scene comprises:
determining a scene shadow map of the first target scene range and/or the second target scene range based on the second position when the update frequency of the first target scene range and/or the second target scene range is satisfied;
when the update frequency of the second target scene range is not satisfied, reusing the previously generated scene shadow map of the second target scene range;
when the preset update frequency of the first target scene range is not met, but the moving range of the virtual object meets the preset moving range or the sunlight direction in the game scene to be displayed meets the preset direction range, determining a scene shadow map of the first target scene range based on the second position;
and when the preset update frequency of the first target scene range is not met, the moving range of the virtual object does not meet the preset moving range, and the sunlight direction in the game scene to be displayed does not meet the preset direction range, reusing the previously generated scene shadow map of the first target scene range.
7. The method of claim 1, wherein the step of determining, from the scene shadow maps of the plurality of target scene ranges, the light and shadow data of the game screen of the game scene to be displayed according to the priority of each scene shadow map, comprises:
generating a screen shadow map from the scene shadow maps according to the priority of each scene shadow map;
determining light and shadow data of a game picture of a game scene to be displayed based on the screen shadow map, so that the game picture of the game scene to be displayed is displayed on the screen of the game device based on the light and shadow data.
8. A light and shadow data processing apparatus for a game, the apparatus being applied to a game device, the game comprising a virtual object in a three-dimensional game scene, the apparatus comprising:
the first determining module is used for determining a plurality of target scene ranges, and the positions of the target scene ranges are determined according to the positions of the virtual objects;
the generating module is used for generating a scene shadow map corresponding to the target scene range, wherein the first target scene range is smaller than the second target scene range, and the precision of the first shadow map corresponding to the first target scene range is larger than that of the second shadow map corresponding to the second target scene range;
a second determining module, configured to determine, according to the scene shadow maps of the multiple target scene ranges and according to the priority of each scene shadow map, light and shadow data of a game picture of a game scene to be displayed, wherein the precision of a scene shadow map is positively related to its priority; when determining the light and shadow data, sampling is carried out in the scene shadow maps of the plurality of target scene ranges to obtain shadow data, and where the plurality of target scene ranges cover the same position, the shadow data at that position is sampled from the scene shadow map with the highest precision.
9. A game device comprising a memory, a processor, the memory having stored therein a computer program executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method of any of the preceding claims 1 to 7.
10. A computer readable storage medium storing machine executable instructions which, when invoked and executed by a processor, cause the processor to perform the method of any one of claims 1 to 7.
CN202010272017.8A 2020-04-08 2020-04-08 Game light and shadow data processing method and device and game equipment Active CN111489430B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202310856806.XA CN117132702A (en) 2020-04-08 2020-04-08 Game light and shadow data processing method and device and game equipment
CN202010272017.8A CN111489430B (en) 2020-04-08 2020-04-08 Game light and shadow data processing method and device and game equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010272017.8A CN111489430B (en) 2020-04-08 2020-04-08 Game light and shadow data processing method and device and game equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310856806.XA Division CN117132702A (en) 2020-04-08 2020-04-08 Game light and shadow data processing method and device and game equipment

Publications (2)

Publication Number Publication Date
CN111489430A CN111489430A (en) 2020-08-04
CN111489430B true CN111489430B (en) 2024-03-01

Family

ID=71811727

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010272017.8A Active CN111489430B (en) 2020-04-08 2020-04-08 Game light and shadow data processing method and device and game equipment
CN202310856806.XA Pending CN117132702A (en) 2020-04-08 2020-04-08 Game light and shadow data processing method and device and game equipment

Country Status (1)

Country Link
CN (2) CN111489430B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112587921A (en) * 2020-12-16 2021-04-02 成都完美时空网络技术有限公司 Model processing method and device, electronic equipment and storage medium
CN112370784B (en) * 2021-01-15 2021-04-09 腾讯科技(深圳)有限公司 Virtual scene display method, device, equipment and storage medium
CN113313811A (en) * 2021-06-29 2021-08-27 完美世界(北京)软件科技发展有限公司 Illumination processing method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103180881A (en) * 2010-12-24 2013-06-26 中国科学院自动化研究所 Fast rendering method of third dimension of complex scenes in internet
CN108579082A (en) * 2018-04-27 2018-09-28 网易(杭州)网络有限公司 The method, apparatus and terminal of shadow are shown in game
CN110585713A (en) * 2019-09-06 2019-12-20 腾讯科技(深圳)有限公司 Method and device for realizing shadow of game scene, electronic equipment and readable medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4833674B2 (en) * 2006-01-26 2011-12-07 株式会社コナミデジタルエンタテインメント GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM

Also Published As

Publication number Publication date
CN111489430A (en) 2020-08-04
CN117132702A (en) 2023-11-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant