CN117065330A - Fluid special effect processing method, device, computer equipment and storage medium - Google Patents

Fluid special effect processing method, device, computer equipment and storage medium

Info

Publication number
CN117065330A
CN117065330A (application CN202311085472.7A)
Authority
CN
China
Prior art keywords
target
special effect
height
scene
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311085472.7A
Other languages
Chinese (zh)
Inventor
戴镇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311085472.7A
Publication of CN117065330A

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/663 Methods for processing data by generating or executing the game program for rendering three dimensional images for simulating liquid objects, e.g. water, gas, fog, snow, clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

Embodiments of the present application disclose a fluid special effect processing method and apparatus, a computer device, and a computer-readable storage medium. Scene data in a rendering buffer is acquired, and the coordinates of each screen pixel are mapped into a three-dimensional space according to the lens information of a virtual lens in the scene data and the depth information corresponding to the screen pixel; a target horizontal observation distance between the screen pixel and the virtual lens is calculated from the three-dimensional coordinates of the screen pixel in the three-dimensional space; the target height of the screen pixel in the three-dimensional space is determined from the three-dimensional coordinates; the target display parameter of the fluid special effect at the target horizontal observation distance and target height is determined based on a preset relationship between the display parameter of the fluid special effect and the horizontal observation distance and height; and rendering is performed based on the target display parameters corresponding to the screen pixels and the scene data to obtain a virtual scene with the fluid special effect and a sense of depth. The embodiments thereby achieve a better scene depth effect presented by the fluid special effect in the virtual scene.

Description

Fluid special effect processing method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of communication technologies, and in particular to a fluid special effect processing method and apparatus, a computer device, and a storage medium, where the storage medium is a computer-readable storage medium.
Background
Adding a fluid special effect to a virtual scene can improve its display effect. For example, height fog can be added to the virtual scene: the fog concentration is adjusted based on the straight-line distance between each screen pixel and the virtual lens so as to distinguish distant views from close views and give the virtual scene a sense of depth. However, when the virtual lens is high, the straight-line distances between different screen pixels and the virtual lens are similar, so the depth effect in the rendered virtual scene is poor.
Disclosure of Invention
Embodiments of the present application provide a fluid special effect processing method and apparatus, a computer device, and a storage medium, which can achieve a better scene depth effect of a fluid special effect in a virtual scene.
The fluid special effect processing method provided by the embodiments of the present application includes the following steps:
acquiring scene data in a rendering buffer, and mapping coordinates of screen pixels into a three-dimensional space according to lens information of a virtual lens in the scene data and depth information corresponding to the screen pixels;
calculating a target horizontal observation distance between each screen pixel and the virtual lens according to the three-dimensional coordinates of the screen pixel in the three-dimensional space;
determining the target height of each screen pixel in the three-dimensional space according to the three-dimensional coordinates;
determining a target display parameter of the fluid special effect at the target horizontal observation distance and the target height based on a preset relationship between the display parameter of the fluid special effect and the horizontal observation distance and the height;
rendering based on the target display parameters corresponding to the screen pixels and the scene data to obtain a virtual scene with the fluid special effect and a sense of depth.
Correspondingly, an embodiment of the present application further provides a fluid special effect processing apparatus, including:
an acquisition unit configured to acquire scene data in a rendering buffer and map coordinates of screen pixels into a three-dimensional space according to lens information of a virtual lens in the scene data and depth information corresponding to the screen pixels;
a distance calculation unit configured to calculate a target horizontal observation distance between each screen pixel and the virtual lens according to the three-dimensional coordinates of the screen pixel in the three-dimensional space;
a height determination unit configured to determine the target height of each screen pixel in the three-dimensional space according to the three-dimensional coordinates;
a parameter determination unit configured to determine a target display parameter of the fluid special effect at the target horizontal observation distance and the target height based on a preset relationship between the display parameter of the fluid special effect and the horizontal observation distance and the height;
and a rendering unit configured to render based on the target display parameters corresponding to the screen pixels and the scene data to obtain a virtual scene with the fluid special effect and a sense of depth.
Correspondingly, an embodiment of the present application further provides a computer device including a memory and a processor; the memory stores a computer program, and the processor is configured to run the computer program in the memory to perform any of the fluid special effect processing methods provided by the embodiments of the present application.
Accordingly, embodiments of the present application further provide a computer-readable storage medium for storing a computer program that is loaded by a processor to perform any of the fluid special effect processing methods provided by the embodiments of the present application.
According to the embodiments of the present application, scene data in the rendering buffer is acquired, and the coordinates of each screen pixel are mapped into a three-dimensional space according to the lens information of the virtual lens in the scene data and the depth information corresponding to the screen pixel; a target horizontal observation distance between the screen pixel and the virtual lens is calculated from the three-dimensional coordinates of the screen pixel in the three-dimensional space; the target height of the screen pixel in the three-dimensional space is determined from the three-dimensional coordinates; the target display parameter of the fluid special effect at the target horizontal observation distance and target height is determined based on a preset relationship between the display parameter of the fluid special effect and the horizontal observation distance and height; and rendering is performed based on the target display parameters corresponding to the screen pixels and the scene data to obtain a virtual scene with the fluid special effect and a sense of depth.
By mapping the screen pixels into the three-dimensional space to obtain the target height of each screen pixel and its target horizontal observation distance to the virtual lens, the embodiments increase the differences in distance among the screen pixels. The differences among the display parameters of the fluid special effect, obtained by adjustment based on the target horizontal observation distance and target height, increase accordingly, so the scene depth effect presented by the fluid special effect in the virtual scene is better.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a fluid special effect processing method provided by an embodiment of the present application;
fig. 2 is a schematic view of a virtual scene provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a fog special effect provided by an embodiment of the application;
FIG. 4 is a schematic diagram of a fluid special effect processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort fall within the protection scope of the present application.
The embodiment of the application provides a fluid special effect processing method, a device, computer equipment and a computer readable storage medium. The fluid special effect processing device can be integrated in computer equipment, and the computer equipment can be a server, a terminal and other equipment.
The terminal may include a mobile phone, a wearable intelligent device, a tablet computer, a notebook computer, a personal computer (PC, Personal Computer), a vehicle-mounted computer, and the like.
The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms.
The following will describe in detail. The following description of the embodiments is not intended to limit the preferred embodiments.
The present embodiment will be described from the perspective of a fluid special effect processing apparatus, which may be specifically integrated in a computer device, which may be a server or a terminal, or other devices.
First, a virtual scene according to an embodiment of the present application will be described.
A virtual scene is a picture displayed (or provided) by an application program when it runs on a terminal or a server. Optionally, the virtual scene is a simulation of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. The virtual environment may be sky, land, sea, etc., where the land includes environmental elements such as deserts and cities.
In an embodiment, the virtual scene may be a game scene, that is, a scene carrying the complete game logic of virtual objects under user control. For example, in a sandbox shooting game the virtual scene is the game world in which the player controls a virtual object, and an exemplary virtual scene may include at least one element among mountains, flat land, rivers, lakes, oceans, deserts, sky, plants, buildings, and vehicles. In a card game, the virtual scene is a scene for showing released cards or the virtual objects corresponding to them; an exemplary virtual scene may include an arena, a battlefield, or another "field" element that can display the state of card play. For a multiplayer online tactical competitive game, the virtual scene is a terrain scene in which virtual objects fight, and an exemplary virtual scene may include mountains, lines, rivers, classrooms, tables and chairs, podiums, and the like.
A fluid special effect added to a virtual scene is a screen-space special effect that can make the virtual scene more vivid or rich and can create a sense of depth of field; it may include a fog special effect, a smoke special effect, and the like.
The specific flow of the fluid special effect processing method provided by the embodiment of the present application may be as follows, as shown in FIG. 1:
101. Acquire scene data in the rendering buffer, and map the coordinates of screen pixels into a three-dimensional space according to the lens information of the virtual lens in the scene data and the depth information corresponding to the screen pixels.
The rendering buffer may be a buffer for caching the scene data required for rendering the virtual scene. In the embodiment of the present application, a scene without the fluid effect may be rendered based on the scene data, while the virtual scene in step 105 is the scene with the fluid effect added.
A scene image, which may also be referred to as a screen rendering, may be obtained by rendering; displaying the scene image on a display device presents the virtual scene, so the color displayed by each pixel of the display device depends on the corresponding pixel in the scene image.
The virtual lens can be regarded as the observation viewpoint in the three-dimensional scene: it determines which part of the three-dimensional scene is rendered, and rendering that part yields the virtual scene of the embodiment of the present application.
The purpose of mapping the screen pixels into the three-dimensional space is to decompose the distance between each screen pixel and the virtual lens into a horizontal distance and a height, increasing the difference between the distances of different screen pixels to the virtual lens. The three-dimensional space may therefore be the space of the three-dimensional scene in which the virtual scene is observed, or another three-dimensional space.
The lens information of the virtual lens may include a conversion matrix describing the conversion between the two-dimensional coordinate system of the scene image and three-dimensional coordinates, so the two-dimensional coordinates of a screen pixel can be converted into three-dimensional coordinates in the three-dimensional space using the lens information and the depth information of the screen pixel. The coordinate component along the coordinate axis perpendicular to the horizontal plane then represents the height of the screen pixel in the three-dimensional space.
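As a loose illustration of this mapping, the sketch below reconstructs a position in three-dimensional space from a screen UV and its depth-buffer value using an inverse view-projection matrix. The function name, the NumPy representation, and the assumed [0, 1] depth range are illustrative assumptions, not details from the patent:

```python
import numpy as np

def screen_to_world(uv, depth, inv_view_proj):
    """Map a screen pixel (uv in [0, 1]^2) and its depth-buffer value
    into three-dimensional space via an inverse view-projection matrix
    (the 'conversion matrix' in the lens information)."""
    # Screen UV -> normalized device coordinates in [-1, 1]
    ndc = np.array([uv[0] * 2.0 - 1.0, uv[1] * 2.0 - 1.0, depth, 1.0])
    world = inv_view_proj @ ndc
    return world[:3] / world[3]  # perspective divide
```

The component of the returned coordinate along the axis perpendicular to the horizontal plane then gives the pixel's height in the three-dimensional space.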
102. Calculate the target horizontal observation distance between the screen pixel and the virtual lens according to the three-dimensional coordinates of the screen pixel in the three-dimensional space.
For example, the three-dimensional coordinates are represented as (x, y, z), where x and y are the coordinate components projected onto the horizontal plane (hereinafter referred to as the first coordinate components), and z is the coordinate component projected onto the coordinate axis perpendicular to the horizontal plane (hereinafter referred to as the second coordinate component).
The lens information of the virtual lens may further include three-dimensional coordinates of the virtual lens in the three-dimensional space, and the target horizontal observation distance between the screen pixel and the virtual lens may be a distance between projection of the screen pixel on a horizontal plane of the three-dimensional space and projection of the virtual lens on the horizontal plane, so the target horizontal observation distance may be calculated from the first coordinate component.
Assume that the three-dimensional coordinates of the virtual lens are (x₁, y₁, z₁) and the three-dimensional coordinates of the screen pixel are (x₂, y₂, z₂); the target horizontal observation distance is then d = √((x₂ − x₁)² + (y₂ − y₁)²).
103. Determine the target height of the screen pixel in the three-dimensional space from the three-dimensional coordinates.
For example, the height of the screen pixel in the three-dimensional space, i.e., the target height, may be determined from the second coordinate component of the screen pixel: assuming the three-dimensional coordinates of the screen pixel are (x₂, y₂, z₂), the target height is H = z₂.
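Steps 102 and 103 can be sketched together in a few lines of Python (function and variable names are illustrative only):

```python
import math

def horizontal_distance_and_height(cam_pos, pixel_pos):
    """Return the target horizontal observation distance (projection of
    the lens-to-pixel offset onto the horizontal plane, step 102) and
    the pixel's target height z2 (step 103)."""
    x1, y1, _ = cam_pos
    x2, y2, z2 = pixel_pos
    d = math.hypot(x2 - x1, y2 - y1)  # sqrt((x2-x1)^2 + (y2-y1)^2)
    return d, z2
```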
104. Determine the target display parameters of the fluid special effect at the target horizontal observation distance and the target height based on a preset relationship between the display parameters of the fluid special effect and the horizontal observation distance and the height.
The fluid effect may include a liquid effect and a gas effect, for example a fog, smoke, or cloud effect, and the display parameters may include any parameter capable of adjusting the display effect of the fluid special effect in the virtual scene, for example concentration, color, and the like.
The fluid special effect may be a fog special effect, whose display parameters may include concentration, color, density, and the like. By setting density, concentration, and color, the state of the fog at different positions in the virtual scene can differ, so that the fog in the near, middle, and far views of the virtual scene differs, creating a sense of depth of field. That is, in an embodiment, the fluid special effect is a fog special effect, and the display parameters include at least one of fog concentration, fog color, and fog density.
In an exponential height fog special effect, the farther the distance, the denser the fog; some distant objects may be completely submerged in the fog, so the user cannot observe the layout of the virtual scene. The exponential height fog determines the fog concentration according to the straight-line distance between each screen pixel and the virtual lens. At some viewing angles, for example when looking down from a high place, the higher virtual lens makes the differences among the straight-line distances between screen pixels and the virtual lens small, so the scene depth effect of a fog special effect rendered based on the straight-line distance is poor.
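The limitation can be made concrete with a small Python comparison (the numbers are hypothetical, not from the patent): with a high virtual lens, straight-line distances to near and far ground points are nearly equal, while their horizontal distances still differ strongly.

```python
import math

cam = (0.0, 0.0, 200.0)  # a high virtual lens looking down

def straight_line(p):
    # distance used by the exponential height fog
    return math.dist(cam, p)

def horizontal(p):
    # distance used in the embodiments of the application
    return math.hypot(p[0] - cam[0], p[1] - cam[1])

near, far = (30.0, 0.0, 0.0), (90.0, 0.0, 0.0)
# straight-line distances: ~202.2 vs ~219.3 (under 9% apart)
# horizontal distances:      30.0 vs   90.0 (a factor of 3 apart)
```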
In the embodiments of the present application, the display parameters are determined according to the horizontal observation distance and the height, which increases the differences among the screen pixels' display parameters for the fluid special effect in the virtual scene.
The preset relationship between the display parameter of the fluid special effect and the horizontal observation distance and height may include a mapping between combinations of horizontal observation distance and height and display parameters; for example, horizontal observation distance a combined with height b corresponds to display parameter c. The target display parameter corresponding to a screen pixel can then be determined from the mapping, the target horizontal observation distance, and the target height.
Optionally, the preset relationship may consist of a relationship between the horizontal observation distance and the display parameter and a relationship between the height and the display parameter. The former may be a proportional relationship, or any other relationship under which a larger horizontal observation distance in the virtual scene makes the display effect of the fluid special effect more obvious. The latter makes the display effects of the fluid special effect differ among screen pixels that share the same horizontal observation distance but have different heights.
For example, the minimum and maximum heights among the screen pixels sharing the same horizontal observation distance can be determined, and the height drop at that horizontal observation distance obtained as the difference between the maximum and minimum heights. The ratio of the difference between the maximum height and each screen pixel's height to the height drop is then calculated and used as a weight to adjust the display parameter.
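A minimal sketch of this height-drop weighting (function and variable names are assumed, not from the patent):

```python
def weight_by_height_drop(heights, base_param):
    """For screen pixels sharing one horizontal observation distance,
    scale the display parameter by (h_max - h) / (h_max - h_min):
    the lowest pixel keeps the full parameter, the highest gets none."""
    h_min, h_max = min(heights), max(heights)
    drop = (h_max - h_min) or 1.0  # guard against a zero height drop
    return [base_param * (h_max - h) / drop for h in heights]
```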
Optionally, the display parameters may also be adjusted according to the scene layout of the virtual scene. For example, the terrain distribution of the three-dimensional scene corresponding to the virtual scene may be determined from its terrain map, and the concentrations of the screen pixels belonging to each terrain adjusted according to the features of the different terrains, so that the fluid special effect in the virtual scene exhibits a staggered, layered effect. That is, in an embodiment, the step of determining, based on a preset relationship between the display parameters of the fluid special effect and the horizontal observation distance and height, the target display parameters of the fluid special effect at the target horizontal observation distance and target height includes:
acquiring scene layout information corresponding to the virtual scene, where the scene layout information includes the distribution positions in the three-dimensional space of the scene objects forming the virtual scene and their corresponding special effect heights;
determining the target special effect height corresponding to each screen pixel according to the distribution positions and the three-dimensional coordinates of each screen pixel;
determining the distance display parameter of the fluid special effect for each screen pixel according to the correspondence between the display parameter and the horizontal observation distance in the preset relationship and the target horizontal observation distance of each screen pixel;
adjusting the distance display parameters according to the height relationship between the target special effect height and the target height corresponding to each screen pixel to obtain the target display parameters of each screen pixel.
The scene layout information may indicate distribution conditions of different scene objects in the three-dimensional scene corresponding to the virtual scene and corresponding special effect heights, that is, the scene layout information may include distribution positions of the scene objects in the three-dimensional space in the virtual scene and corresponding special effect heights.
A scene object may be an object constituting the three-dimensional scene, such as a city, a tall building, a mountain, a plateau, or a plain. The special effect height may be the effective height up to which the fluid special effect acts on the scene object; that is, no fluid special effect is applied to screen pixels above the effective height of the scene object.
In an embodiment, the scene layout information may reflect a distribution situation of a three-dimensional scene corresponding to the virtual scene on a horizontal plane, for example, the scene layout information includes a correspondence between coordinates and scene objects, and then, the scene object corresponding to a point in the three-dimensional scene may be determined according to the coordinates obtained after the point is projected to the horizontal plane.
The scene layout information may be generated when constructing a three-dimensional scene, and may be in different data forms, for example, may be a scene layout map.
In an embodiment, the pixel values of the pixels in the scene layout are used to represent the corresponding scene object, and the corresponding image pixels can be found in the scene layout according to the first coordinate component of the screen pixels, and the correspondence between the screen pixels and the image pixels of the scene layout can be one-to-one or many-to-one. According to the pixel value of the image pixel, determining a scene object to which the screen pixel belongs in the three-dimensional scene, and taking the special effect height corresponding to the scene object as the target special effect height corresponding to the screen pixel.
Optionally, a relationship between the special effect height and the target horizontal observation distance may be set for each scene object. This relationship may be a pow curve, specifically y = x^p, where x is the horizontal observation distance, y is the special effect height, and p is a parameter controlling the shape of the curve.
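A sketch of such a pow-curve relationship; the normalization by a maximum distance is an added assumption to keep x within [0, 1]:

```python
def effect_height(distance, base_height, p=0.5, max_distance=1000.0):
    """Special effect height as a pow curve y = x**p of the normalized
    horizontal observation distance, scaled by the scene object's base
    height; p controls the shape of the curve."""
    x = min(distance / max_distance, 1.0)
    return base_height * x ** p
```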
In another embodiment, the pixel values in the scene layout map represent special effect heights directly; for example, different scene objects have different special effect heights, and the distribution in the three-dimensional space of the scene objects corresponding to the different special effect heights can be determined from the pixel values. The image pixel corresponding to a screen pixel in the scene layout map is determined from the first coordinate components of the screen pixel, and the special effect height indicated by that image pixel's value is taken as the target special effect height of the screen pixel.
The special effect heights of different scene objects can be adjusted according to the horizontal observation distances, so that in the virtual scene, for the same scene object, the part at a closer horizontal observation distance is exposed more and the part at a longer horizontal observation distance is exposed less. That is, in an embodiment, the step of determining the target special effect height corresponding to each screen pixel according to the distribution positions and the three-dimensional coordinates of each screen pixel may specifically include:
determining a scene object where each screen pixel is located according to the three-dimensional coordinates and the distribution position of each screen pixel;
obtaining candidate special effect heights of each screen pixel according to the special effect height of the scene object where each screen pixel is located;
and adjusting the corresponding candidate special effect height according to the target horizontal observation distance of each screen pixel to obtain the corresponding target special effect height of each screen pixel.
For example, according to the scene layout information and the three-dimensional coordinates of a screen pixel, the scene object to which the screen pixel maps in the three-dimensional space can be determined. The special effect height of that scene object is used as the candidate special effect height, and the horizontal observation distance of the screen pixel is used as a weight factor to adjust the candidate special effect height into the target special effect height. Thus, for the same scene object, the closer the horizontal observation distance, the lower the target special effect height, and the more of the scene object is left unblocked by the fluid special effect.
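A minimal sketch of this distance-weighted adjustment, using a simple linear weight (one possible choice; the patent does not fix the exact weighting):

```python
def target_effect_height(candidate_height, distance, max_distance=1000.0):
    """Scale the candidate special effect height (taken from the scene
    object) by a normalized horizontal-distance weight, so nearer parts
    of the same object get a lower effect height and stay more exposed."""
    w = min(distance / max_distance, 1.0)
    return candidate_height * w
```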
Then, the distance display parameter at the target horizontal observation distance is determined according to the relationship between the horizontal observation distance and the display parameter in the preset relationship.
If the target height is smaller than the special effect height, the target height can be used as a weight for adjusting the distance display parameter, with a larger target height corresponding to a smaller weight, so that the display effect of the fluid special effect gradually weakens as the height increases; if the target height is larger than the special effect height, the distance display parameter is adjusted to a value at which the fluid special effect has no display effect, and the target display parameter is obtained. In this way, the fluid special effect exhibits a gradient over height, and the tops of scene objects are not occluded.
Optionally, the height difference may be used as the weight for adjusting the distance display parameter, where the greater the height difference, the greater the corresponding weight, so that the display effect of the fluid special effect is more obvious at positions of lower height. That is, in an embodiment, the step of adjusting the distance display parameter according to the height relationship between the target special effect height and the target height corresponding to each screen pixel to obtain the target display parameter of each screen pixel may specifically include:
calculating the height difference between the target special effect height and the target height of each screen pixel;
and adjusting the distance display parameters according to the ratio of the height difference of each pixel to the target special effect height to obtain target display parameters.
For example, the target height is subtracted from the special effect height to obtain the height difference between them, and the distance display parameter is then adjusted based on this height difference, so that the display effect of the fluid special effect is more obvious at lower positions. When the height difference is smaller than zero, that is, when the target height is higher than the special effect height, the distance display parameter is adjusted to a value at which the fluid special effect has no display effect, and the target display parameter is obtained. As a result, the tops of scene objects are not occluded by the fluid special effect, the scene objects and their height undulations can be observed in the virtual scene, and near and distant views in the virtual scene can be distinguished.
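The ratio-based adjustment of the preceding steps can be sketched as below. The function name and the linear use of the ratio are illustrative assumptions:

```python
def adjust_by_height_difference(distance_param, target_height, effect_height):
    """Adjust a distance display parameter by the ratio of the height
    difference (special effect height minus target height) to the
    target special effect height, so the fog is most visible at low
    positions. A negative difference (the pixel sits above the fog
    height) removes the display effect entirely.
    """
    diff = effect_height - target_height
    if diff < 0:
        return 0.0  # target height above the fog: no display effect
    # Larger difference -> larger weight -> more obvious fog.
    return distance_param * (diff / effect_height)
```

At ground level the full distance parameter is kept; at the fog's own height the effect fades to zero, giving the vertical gradient described above.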
In an embodiment, the fluid special effect is a fog special effect, and the display parameters may include fog concentration, fog density, and fog color. For example, the fog display effects at different positions in the virtual scene can be differentiated by adjusting the fog concentrations corresponding to different screen pixels; alternatively, the fog concentration may be fixed and the fog density adjusted so that the fog display effects differ across positions in the virtual scene. In addition, near and distant views can be distinguished through color adjustment.
In an embodiment, the fog concentration and the fog color can be adjusted according to the horizontal observation distance and the height of the screen pixels, so that the fog concentrations and colors at different positions in the virtual scene differ. Optionally, the fog concentration can first be adjusted according to the horizontal observation distance and the height of the screen pixels, and fog colors can then be assigned to different horizontal observation distance ranges and different height ranges, thereby adjusting both the fog concentration and the fog color.
105. Rendering is carried out based on the target display parameters corresponding to the screen pixels and scene data, and a virtual scene with a fluid special effect and a depth sense is obtained.
For example, the virtual scene with the fluid special effect added can be rendered according to the target display parameters and scene data corresponding to the screen pixels. Since step 104 adjusts the display parameters of the fluid special effect according to the horizontal observation distance and the height, the display effect of the fluid special effect makes the depth-of-field variation in the virtual scene distinguishable and the height undulations of scene objects observable, so that the virtual scene presents a sense of depth, as shown in fig. 2.
The virtual scene may further have an illumination effect. The illumination parameters of the fluid special effect for each screen pixel can be calculated according to an illumination model, such as a BRDF reflection model; the illumination parameters can characterize the lit positions of the fluid special effect in the virtual scene, the shadowed positions of the fluid special effect can likewise be calculated, and the illumination parameters can include the color corresponding to the illumination.
The illumination parameters are adjusted according to the target display parameters of the lit positions, for example by multiplying the target display parameters and the illumination parameters, so that the fluid special effect exhibits a lit appearance. If a target display parameter indicates that there is no fluid special effect at a position, the illumination parameter can be adjusted by the target display parameter to a value indicating no illumination effect, so that illumination acts only on the fluid special effect; similarly, the fluid special effect can exhibit a shadow effect.
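The multiplicative combination of display and illumination parameters can be sketched as follows. The per-channel multiply and the `shadow_factor` term are assumptions for illustration:

```python
def lit_fog_color(fog_density, fog_color, light_color, shadow_factor):
    """Combine a fog display parameter with lighting (illustrative).

    Multiplying the target display parameter (fog_density) with the
    illumination parameters means that where the density is zero the
    lighting contribution is also zero, i.e. illumination acts only
    on the fluid special effect. shadow_factor in [0, 1] darkens
    shadowed positions.
    """
    r = fog_color[0] * light_color[0] * shadow_factor * fog_density
    g = fog_color[1] * light_color[1] * shadow_factor * fog_density
    b = fog_color[2] * light_color[2] * shadow_factor * fog_density
    return (r, g, b)
```

Because the density multiplies every channel, positions without fog contribute nothing to the lit result, matching the behavior described above.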
The embodiment of the application can render the virtual scene in a deferred rendering mode or the like; if deferred rendering is adopted, the lighting and shadow-receiving calculation results of the virtual scene (including data such as the illumination parameters) can be obtained from the buffer corresponding to the deferred rendering mode.
Optionally, the fluid special effect in the near-view region may be eliminated according to the depth information. That is, in an embodiment, the step of rendering based on the target display parameters and scene data corresponding to the screen pixels to obtain a virtual scene with a fluid special effect and a sense of depth may specifically include:
selecting target screen pixels from the screen pixels according to the depth information, wherein the target screen pixels correspond to the near-view region of the virtual scene;
adjusting the target display parameters of the target screen pixels to eliminate the display effect of the fluid special effect in the near-view region;
rendering is carried out according to the adjusted target display parameters and scene data, and a virtual scene is obtained.
For example, the depth information indicates the straight-line distance between a screen pixel and the virtual lens in three-dimensional space, so the near-view region in the virtual scene can be determined according to the depth information. For example, a depth threshold may be set, and the screen pixels whose depth is smaller than the threshold, among the screen pixels displaying the virtual scene, are taken as the target screen pixels, which correspond to the near-view region of the virtual scene.
And adjusting the target display parameters of the target screen pixels again to eliminate the fluid special effects of the near-field region in the virtual scene.
Rendering according to the adjusted target display parameters of the target screen pixels, the display parameters of other screen pixels and scene data to obtain a virtual scene.
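The near-view culling in the three steps above can be sketched as a simple threshold pass. The list-based representation and the threshold semantics are illustrative assumptions:

```python
def cull_near_fog(params, depths, near_threshold):
    """Zero out fog display parameters for near-view pixels.

    Pixels whose depth (straight-line distance from the virtual lens)
    is below an assumed threshold are treated as the near-view region,
    and their fluid special effect display parameter is eliminated.
    """
    return [0.0 if d < near_threshold else p
            for p, d in zip(params, depths)]
```

Rendering then proceeds with the adjusted parameters for the near pixels and the unchanged parameters for the rest.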
The virtual scene may be a game scene for a player to play. In some games, the player's visible range is limited by fog; for example, as shown in fig. 3, the fog is tiled flat over the game scene and occludes part of the game map, so that the player is shown only the region relevant to the current operation. However, because this fog is laid out regardless of terrain, its presentation effect is poor. To address this, in an embodiment, the rendering step may specifically include:
Obtaining a target special effect mask map for shielding a visible range of a game player in a virtual scene;
adjusting the target display parameters according to the pixel values of the pixels in the target special effect mask image;
Rendering is carried out based on the adjusted target display parameters and scene data, and a virtual scene with a fluid special effect and a depth sense is obtained.
The target special effect mask map is used to occlude the player's visible range in the virtual scene, so that only the part of the scene relevant to the player is displayed.
For example, a target special effect mask map may be specifically obtained, and a pixel value of a pixel in the target special effect mask map may indicate whether a corresponding screen pixel needs to be added with a fluid special effect.
The target display parameters of the screen pixels are adjusted according to the pixel values in the target special effect mask map, so that the fluid special effect is added at the positions the mask map indicates as requiring it, increasing the area occluded by the fluid special effect in the virtual scene.
Rendering according to the adjusted target display parameters and the scene data then yields the virtual scene, in which the farther a position, the more obvious the display effect of the fluid special effect; the lower a position, the more obvious the display effect; and regions irrelevant to the player are occluded.
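The mask adjustment can be sketched as below. The convention that a mask pixel value of 1.0 marks a position to be hidden, and the use of a per-pixel maximum as the blend, are both assumptions:

```python
def apply_effect_mask(params, mask):
    """Blend fog display parameters with a special effect mask map.

    Assumed convention: a mask value of 1.0 marks a position that
    should be occluded from the player, so the fog parameter there is
    raised to full strength; 0.0 leaves the computed parameter as-is.
    """
    return [max(p, m) for p, m in zip(params, mask)]
```

Positions flagged by the mask are forced to full fog, while elsewhere the distance- and height-adjusted parameters are preserved.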
As can be seen from the above, in the embodiment of the present application, scene data in the rendering buffer is acquired, and the coordinates of screen pixels are mapped into three-dimensional space according to the lens information of the virtual lens in the scene data and the depth information corresponding to the screen pixels; a target horizontal observation distance between each screen pixel and the virtual lens is calculated according to the three-dimensional coordinates of the screen pixel in the three-dimensional space; the target height of the screen pixel in the three-dimensional space is determined according to the three-dimensional coordinates; the target display parameters of the fluid special effect at the target horizontal observation distance and the target height are determined based on a preset relation between the display parameters of the fluid special effect and the horizontal observation distance and height; and rendering is carried out based on the target display parameters corresponding to the screen pixels and the scene data, obtaining a virtual scene with a fluid special effect and a sense of depth.
According to the embodiment of the application, the screen pixels are mapped into the three-dimensional space to obtain the target height of the screen pixels in the three-dimensional space and the target horizontal observation distance between the screen pixels and the virtual lens, so that the difference of the screen pixels in the distance can be increased, the difference of display parameters of each screen pixel about the fluid special effect obtained by adjusting the target horizontal observation distance and the target height is also increased, and the scene depth effect presented by the fluid special effect in the virtual scene is better.
In order to facilitate better implementation of the fluid special effect processing method provided by the embodiment of the application, in an embodiment, a fluid special effect processing device is also provided. Where the meaning of the terms is the same as in the fluid special effect processing method described above, specific implementation details may be referred to in the description of the method embodiments.
The fluid special effect processing apparatus may be integrated in a computer device, as shown in fig. 4, and the fluid special effect processing apparatus may include: an acquisition unit 301, a distance calculation unit 302, a height determination unit 303, a parameter determination unit 304, and a rendering unit 305 are specifically as follows:
(1) The acquisition unit 301: the method comprises the steps of acquiring scene data in a rendering buffer zone, and mapping coordinates of screen pixels into a three-dimensional space according to lens information of virtual lenses in the scene data and depth information corresponding to the screen pixels.
(2) Distance calculation unit 302: for calculating a target horizontal viewing distance between the screen pixel and the virtual lens from three-dimensional coordinates of the screen pixel in three-dimensional space.
(3) The height determination unit 303: for determining a target height of a screen pixel in three-dimensional space from the three-dimensional coordinates.
(4) Parameter determination unit 304: the method is used for determining target display parameters of the fluid special effects under the target horizontal observation distance and the target height based on a preset relation between the display parameters of the fluid special effects and the horizontal observation distance and the height.
In an embodiment, the parameter determining unit 304 includes:
an information acquisition subunit: the method comprises the steps of acquiring scene layout information corresponding to a virtual scene, wherein the scene layout information comprises distribution positions of scene objects forming the virtual scene in a three-dimensional space and corresponding special effect heights;
a special effect height determination subunit: the method comprises the steps of determining the height of a target special effect corresponding to each screen pixel according to the distribution position and the three-dimensional coordinates of each screen pixel;
a horizontal parameter determination subunit: the distance display parameter of the fluid special effect for each screen pixel is determined according to the corresponding relation between the display parameter and the horizontal observation distance in the preset relation and the target horizontal observation distance of each screen pixel;
A regulating subunit: and the distance display parameters are adjusted according to the height relation between the target special effect height and the target height corresponding to each screen pixel, so that the target display parameters of each screen pixel are obtained.
In one embodiment, the special effects height determination subunit comprises:
an object determination module: the method comprises the steps of determining a scene object where each screen pixel is located according to the three-dimensional coordinates and the distribution position of each screen pixel;
a candidate height determination module: the candidate special effect height of each screen pixel is obtained according to the special effect height of the scene object where each screen pixel is located;
and the height adjusting module is used for: and the method is used for adjusting the corresponding candidate special effect height according to the target horizontal observation distance of each screen pixel to obtain the corresponding target special effect height of each screen pixel.
In an embodiment, the conditioning subunit comprises:
and (3) calculating a height difference: for calculating a height difference between the target special effect height and the target height for each screen pixel;
parameter adjustment: and the distance display parameters are adjusted according to the ratio of the height difference of each pixel to the target special effect height, so as to obtain the target display parameters.
(5) The rendering unit 305: the method is used for rendering based on the target display parameters and scene data corresponding to the screen pixels, and a virtual scene with a fluid special effect and a depth sense is obtained.
In an embodiment, the rendering unit 305 comprises:
selecting a subunit: the method comprises the steps of selecting a target screen pixel from screen pixels according to depth information, wherein the target screen pixel corresponds to a close-up region of a virtual scene;
the cancellation subunit: the method comprises the steps of adjusting target display parameters of target screen pixels to eliminate the display effect of a fluid special effect in a close-range area;
a first scene rendering subunit: and rendering according to the adjusted target display parameters and scene data to obtain a virtual scene.
In an embodiment, the rendering unit 305 comprises:
mask map acquisition subunit: the method comprises the steps of obtaining a target special effect mask map for shielding a visible range of a game player in a virtual scene;
target parameter adjustment subunit: the method comprises the steps of adjusting target display parameters according to pixel values of pixels in a target special effect mask image;
a second scene rendering subunit: and the method is used for rendering based on the adjusted target display parameters and scene data to obtain a virtual scene with a fluid special effect and a depth feeling.
As can be seen from the above, the fluid special effect processing device according to the embodiment of the present application obtains scene data in the rendering buffer through the obtaining unit 301, and maps coordinates of screen pixels into the three-dimensional space according to lens information of the virtual lens in the scene data and depth information corresponding to the screen pixels; the distance calculation unit 302 calculates a target horizontal observation distance between the screen pixel and the virtual lens from the three-dimensional coordinates of the screen pixel in the three-dimensional space; the height determining unit 303 determines a target height of the screen pixel in the three-dimensional space based on the three-dimensional coordinates; the parameter determination unit 304 determines a target display parameter of the fluid special effect at the target horizontal observation distance and the target height based on a preset relationship between the display parameter of the fluid special effect and the horizontal observation distance and the height; the rendering unit 305 performs rendering based on the target display parameters and scene data corresponding to the screen pixels, resulting in a virtual scene with a fluid special effect and a sense of depth.
According to the embodiment of the application, the screen pixels are mapped into the three-dimensional space to obtain the target height of the screen pixels in the three-dimensional space and the target horizontal observation distance between the screen pixels and the virtual lens, so that the difference of the screen pixels in the distance can be increased, the difference of display parameters of each screen pixel about the fluid special effect obtained by adjusting the target horizontal observation distance and the target height is also increased, and the scene depth effect presented by the fluid special effect in the virtual scene is better.
Correspondingly, the embodiment of the application also provides a computer device, which may be a terminal. Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 5, the computer device 500 includes a processor 501 having one or more processing cores, a memory 502 having one or more computer-readable storage media, and a computer program stored on the memory 502 and executable on the processor. The processor 501 is electrically connected to the memory 502. It will be appreciated by those skilled in the art that the computer device structure shown in the figure does not limit the computer device, which may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The processor 501 is a control center of the computer device 500, connects various parts of the entire computer device 500 using various interfaces and lines, and performs various functions of the computer device 500 and processes data by running or loading software programs and/or modules stored in the memory 502, and calling data stored in the memory 502, thereby performing overall monitoring of the computer device 500.
In the embodiment of the present application, the processor 501 in the computer device 500 loads the instructions corresponding to the processes of one or more application programs into the memory 502 according to the following steps, and the processor 501 executes the application programs stored in the memory 502, so as to implement various functions:
acquiring scene data in a rendering buffer zone, and mapping coordinates of screen pixels into a three-dimensional space according to lens information of a virtual lens in the scene data and depth information corresponding to the screen pixels;
calculating a target horizontal observation distance between the screen pixel and the virtual lens according to the three-dimensional coordinates of the screen pixel in the three-dimensional space;
determining the target height of the screen pixel in the three-dimensional space according to the three-dimensional coordinates;
determining target display parameters of the fluid special effects under the target horizontal observation distance and the target height based on a preset relation between the display parameters of the fluid special effects and the horizontal observation distance and the height;
rendering is carried out based on the target display parameters corresponding to the screen pixels and scene data, and a virtual scene with a fluid special effect and a depth sense is obtained.
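The first of the steps above, mapping a screen pixel into three-dimensional space from its depth and the virtual lens information, is a standard unprojection. The sketch below assumes a row-major 4x4 inverse view-projection matrix and an NDC depth range of [-1, 1]; engines differ on both conventions:

```python
def screen_to_world(u, v, depth, inv_view_proj):
    """Unproject a screen pixel (u, v in [0, 1]) and its depth buffer
    value into a world-space position using the virtual lens's inverse
    view-projection matrix (row-major 4x4 nested list). The depth
    convention used here is an assumption.
    """
    # Screen/UV coordinates -> normalized device coordinates.
    ndc = [u * 2.0 - 1.0, v * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0]
    # 4x4 matrix-vector multiply.
    world = [sum(inv_view_proj[r][c] * ndc[c] for c in range(4))
             for r in range(4)]
    # Perspective divide yields the world-space position.
    return [w / world[3] for w in world[:3]]
```

From the resulting position, the horizontal observation distance (distance in the ground plane to the lens) and the target height (the vertical coordinate) follow directly.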
In an embodiment, the step of determining the target display parameter of the fluid effect at the target horizontal viewing distance and the target height based on the preset relationship between the display parameter of the fluid effect and the horizontal viewing distance and the height may include:
Acquiring scene layout information corresponding to a virtual scene, wherein the scene layout information comprises distribution positions of scene objects forming the virtual scene in a three-dimensional space and corresponding special effect heights;
determining the height of the target special effect corresponding to each screen pixel according to the distribution position and the three-dimensional coordinates of each screen pixel;
determining the distance display parameter of the fluid special effect for each screen pixel according to the corresponding relation between the display parameter and the horizontal observation distance in the preset relation and the target horizontal observation distance of each screen pixel;
and adjusting the distance display parameters according to the height relation between the target special effect height and the target height corresponding to each screen pixel to obtain the target display parameters of each screen pixel.
In one embodiment, the step of determining the target special effect height corresponding to each screen pixel according to the distribution position and the three-dimensional coordinates of each screen pixel may include:
determining a scene object where each screen pixel is located according to the three-dimensional coordinates and the distribution position of each screen pixel;
obtaining candidate special effect heights of each screen pixel according to the special effect height of the scene object where each screen pixel is located;
and adjusting the corresponding candidate special effect height according to the target horizontal observation distance of each screen pixel to obtain the corresponding target special effect height of each screen pixel.
In an embodiment, the step of adjusting the distance display parameter according to the height relationship between the target special effect height and the target height corresponding to each screen pixel to obtain the target display parameter of each screen pixel may include:
calculating the height difference between the target special effect height and the target height of each screen pixel;
and adjusting the distance display parameters according to the ratio of the height difference of each pixel to the target special effect height to obtain target display parameters.
In one embodiment, the fluid effect is a fog effect, and the display parameter includes at least one of fog concentration, fog color, and fog density.
In an embodiment, the step of rendering based on the target display parameters and scene data corresponding to the screen pixels to obtain a virtual scene with a fluid special effect and a depth sensation may include:
selecting a target screen pixel from the screen pixels according to the depth information, wherein the target screen pixel corresponds to a close-range region of the virtual scene;
adjusting target display parameters of target screen pixels to eliminate the display effect of the fluid special effect in a close-range area;
rendering is carried out according to the adjusted target display parameters and scene data, and a virtual scene is obtained.
In an embodiment, the step of rendering based on the target display parameters and scene data corresponding to the screen pixels to obtain a virtual scene with a fluid special effect and a depth sensation may include:
obtaining a target special effect mask map for shielding a visible range of a game player in a virtual scene;
adjusting the target display parameters according to the pixel values of the pixels in the target special effect mask image;
rendering is carried out based on the adjusted target display parameters and scene data, and a virtual scene with a fluid special effect and a depth sense is obtained.
According to the embodiment of the application, scene data in the rendering buffer is acquired, and the coordinates of screen pixels are mapped into three-dimensional space according to the lens information of the virtual lens in the scene data and the depth information corresponding to the screen pixels; a target horizontal observation distance between each screen pixel and the virtual lens is calculated according to the three-dimensional coordinates of the screen pixel in the three-dimensional space; the target height of the screen pixel in the three-dimensional space is determined according to the three-dimensional coordinates; the target display parameters of the fluid special effect at the target horizontal observation distance and the target height are determined based on a preset relation between the display parameters of the fluid special effect and the horizontal observation distance and height; and rendering is carried out based on the target display parameters corresponding to the screen pixels and the scene data, obtaining a virtual scene with a fluid special effect and a sense of depth.
According to the embodiment of the application, the screen pixels are mapped into the three-dimensional space to obtain the target height of the screen pixels in the three-dimensional space and the target horizontal observation distance between the screen pixels and the virtual lens, so that the difference of the screen pixels in the distance can be increased, the difference of display parameters of each screen pixel about the fluid special effect obtained by adjusting the target horizontal observation distance and the target height is also increased, and the scene depth effect presented by the fluid special effect in the virtual scene is better.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 5, the computer device 500 further includes: a touch display screen 503, a radio frequency circuit 504, an audio circuit 505, an input unit 506, and a power supply 507. The processor 501 is electrically connected to the touch display 503, the radio frequency circuit 504, the audio circuit 505, the input unit 506, and the power supply 507, respectively. Those skilled in the art will appreciate that the computer device structure shown in FIG. 5 is not limiting of the computer device and may include more or fewer components than shown, or may be combined with certain components, or a different arrangement of components.
The touch display screen 503 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display screen 503 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user as well as various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED), or the like. The touch panel may be used to collect touch operations by the user on or near it (such as operations by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and generate corresponding operation instructions, which execute corresponding programs. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 501, and it can also receive commands from the processor 501 and execute them. The touch panel may cover the display panel; when the touch panel detects a touch operation on or near it, the operation is passed to the processor 501 to determine the type of touch event, and the processor 501 then provides a corresponding visual output on the display panel based on the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 503 to realize the input and output functions.
In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions respectively. That is, the touch display screen 503 may also implement an input function as part of the input unit 506.
The radio frequency circuitry 504 may be used to transceive radio frequency signals to establish wireless communications with a network device or other computer device via wireless communications.
The audio circuitry 505 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. On one hand, the audio circuit 505 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts collected sound signals into electrical signals, which are received by the audio circuit 505 and converted into audio data; the audio data is then output to the processor 501 for processing and sent, for example, to another computer device via the radio frequency circuit 504, or output to the memory 502 for further processing. The audio circuit 505 may also include an earphone jack to provide communication between a peripheral earphone and the computer device.
The input unit 506 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 507 is used to power the various components of the computer device 500. Alternatively, the power supply 507 may be logically connected to the processor 501 through a power management system, so that charging, discharging, and power consumption management are handled by the power management system. The power supply 507 may also include one or more of a direct-current or alternating-current power supply, a recharging system, a power-failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 5, the computer device 500 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium having stored therein a plurality of computer programs that can be loaded by a processor to perform the steps in any of the fluid special effect processing methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
Acquiring scene data in a rendering buffer zone, and mapping coordinates of screen pixels into a three-dimensional space according to lens information of a virtual lens in the scene data and depth information corresponding to the screen pixels;
calculating a target horizontal observation distance between the screen pixel and the virtual lens according to the three-dimensional coordinates of the screen pixel in the three-dimensional space;
determining the target height of the screen pixel in the three-dimensional space according to the three-dimensional coordinates;
determining target display parameters of the fluid special effects under the target horizontal observation distance and the target height based on a preset relation between the display parameters of the fluid special effects and the horizontal observation distance and the height;
rendering is carried out based on the target display parameters corresponding to the screen pixels and scene data, and a virtual scene with a fluid special effect and a depth sense is obtained.
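The first three steps above can be sketched in Python. The matrix math, the parameter names, and the Y-up height convention are illustrative assumptions, not details fixed by the patent; the key idea is that a screen pixel plus its depth-buffer value, combined with the virtual lens (camera) information, pins down a unique point in three-dimensional space.

```python
import numpy as np

def reconstruct_world_position(px, py, depth, inv_view_proj, screen_w, screen_h):
    """Map a screen pixel and its depth-buffer value back into 3D space
    using the virtual lens (camera) information."""
    # Screen coordinates -> normalized device coordinates in [-1, 1]
    ndc = np.array([2.0 * px / screen_w - 1.0,
                    2.0 * py / screen_h - 1.0,
                    depth, 1.0])
    world = inv_view_proj @ ndc          # inverse view-projection transform
    return world[:3] / world[3]          # perspective divide

def fog_inputs(world_pos, camera_pos):
    """Target horizontal observation distance and target height for one pixel."""
    dx = world_pos[0] - camera_pos[0]
    dz = world_pos[2] - camera_pos[2]
    horizontal_distance = float(np.hypot(dx, dz))  # vertical axis ignored
    target_height = float(world_pos[1])            # Y-up convention assumed
    return horizontal_distance, target_height
```

Because only the horizontal components enter the distance, two pixels at the same depth but different heights still receive different (distance, height) pairs, which is what lets the later steps vary the fluid special effect per pixel.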
In an embodiment, the step of determining the target display parameter of the fluid effect at the target horizontal viewing distance and the target height based on the preset relationship between the display parameter of the fluid effect and the horizontal viewing distance and the height may include:
acquiring scene layout information corresponding to a virtual scene, wherein the scene layout information comprises distribution positions of scene objects forming the virtual scene in a three-dimensional space and corresponding special effect heights;
Determining the height of the target special effect corresponding to each screen pixel according to the distribution position and the three-dimensional coordinates of each screen pixel;
determining the distance display parameter of the fluid special effect for each screen pixel according to the corresponding relation between the display parameter and the horizontal observation distance in the preset relation and the target horizontal observation distance of each screen pixel;
and adjusting the distance display parameters according to the height relation between the target special effect height and the target height corresponding to each screen pixel to obtain the target display parameters of each screen pixel.
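The patent leaves the "preset relation" between the display parameter and the horizontal observation distance open. One common choice for fog-like effects is exponential falloff with distance, sketched below; the density constant is illustrative only.

```python
import math

def distance_display_parameter(horizontal_distance, fog_density=0.01):
    """One plausible preset relation between fog strength and horizontal
    observation distance: exponential falloff. The density value is an
    illustrative assumption, not taken from the patent."""
    return 1.0 - math.exp(-fog_density * horizontal_distance)
```

The result grows monotonically from 0 at the lens toward 1 at infinity, so distant pixels are more heavily fogged before any height-based adjustment is applied.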
In one embodiment, the step of determining the target special effect height corresponding to each screen pixel according to the distribution position and the three-dimensional coordinates of each screen pixel may include:
determining a scene object where each screen pixel is located according to the three-dimensional coordinates and the distribution position of each screen pixel;
obtaining candidate special effect heights of each screen pixel according to the special effect height of the scene object where each screen pixel is located;
and adjusting the corresponding candidate special effect height according to the target horizontal observation distance of each screen pixel to obtain the corresponding target special effect height of each screen pixel.
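The patent does not fix the curve used to adjust the candidate special effect height by viewing distance. The sketch below assumes one plausible behavior, raising the effective height for distant pixels so far-away scenery sinks deeper into the fluid layer; the linear ramp and both constants are assumptions.

```python
def adjusted_effect_height(candidate_height, horizontal_distance,
                           falloff_distance=500.0, boost=1.5):
    """Raise the per-object effect height as the target horizontal
    observation distance grows (illustrative curve and constants)."""
    t = min(horizontal_distance / falloff_distance, 1.0)
    return candidate_height * (1.0 + (boost - 1.0) * t)
```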
In an embodiment, the step of adjusting the distance display parameter according to the height relationship between the target special effect height and the target height corresponding to each screen pixel to obtain the target display parameter of each screen pixel may include:
Calculating the height difference between the target special effect height and the target height of each screen pixel;
and adjusting the distance display parameters according to the ratio of the height difference of each pixel to the target special effect height to obtain target display parameters.
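The ratio-based adjustment of the two steps above can be written directly; clamping the ratio to [0, 1] (an assumption, for numerical safety) ensures pixels above the fluid layer receive no effect and pixels at ground level keep the full distance parameter.

```python
def height_adjusted_parameter(distance_param, effect_height, target_height):
    """Scale the distance display parameter by the ratio between the
    height difference (effect height minus pixel height) and the effect
    height, clamped to [0, 1]."""
    if effect_height <= 0.0:
        return 0.0
    ratio = (effect_height - target_height) / effect_height
    return distance_param * max(0.0, min(ratio, 1.0))
```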
In one embodiment, the fluid effect is a fog effect, and the display parameter includes at least one of fog concentration, fog color, and fog density.
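For the fog case, the concentration parameter is typically applied as an alpha-style mix between the scene color and the fog color; the blend below is a standard convention assumed for illustration, not a formula specified by the patent.

```python
def apply_fog(scene_rgb, fog_rgb, concentration):
    """Blend a scene color toward the fog color by a concentration
    value in [0, 1] (standard linear interpolation)."""
    return tuple(s + (f - s) * concentration for s, f in zip(scene_rgb, fog_rgb))
```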
In an embodiment, the step of rendering based on the target display parameters and scene data corresponding to the screen pixels to obtain a virtual scene with a fluid special effect and a depth sensation may include:
selecting a target screen pixel from the screen pixels according to the depth information, wherein the target screen pixel corresponds to a close-range region of the virtual scene;
adjusting target display parameters of target screen pixels to eliminate the display effect of the fluid special effect in a close-range area;
rendering is carried out according to the adjusted target display parameters and scene data, and a virtual scene is obtained.
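Eliminating the effect in the close-range region can be done per pixel from the depth value. The sketch below fades the parameter in linearly rather than cutting it off, so the near area stays clear without a visible seam; the threshold and the linear fade are illustrative assumptions.

```python
def suppress_near_fog(target_param, depth, near_threshold=0.1):
    """Attenuate the fog parameter for close-range pixels selected by
    their depth value (illustrative threshold and linear fade-in)."""
    return target_param * min(depth / near_threshold, 1.0)
```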
In an embodiment, the step of rendering based on the target display parameters and scene data corresponding to the screen pixels to obtain a virtual scene with a fluid special effect and a depth sensation may include:
Obtaining a target special effect mask map for shielding a visible range of a game player in a virtual scene;
adjusting the target display parameters according to the pixel values of the pixels in the target special effect mask image;
rendering is carried out based on the adjusted target display parameters and scene data, and a virtual scene with a fluid special effect and a depth sense is obtained.
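The mask-map adjustment reduces to a per-pixel multiply. The mask semantics below (0 clears the fog inside the player's visible range, 1 leaves it intact) are an assumption for illustration; the patent only requires that the mask pixel values modulate the target display parameters.

```python
def mask_adjusted_parameter(target_param, mask_value):
    """Attenuate the fog parameter by a mask pixel value in [0, 1]:
    0 shields the player's visible range, 1 leaves the fog unchanged."""
    return target_param * mask_value
```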
According to the embodiment of the application, the scene data in the rendering buffer area is obtained, and the coordinates of the screen pixels are mapped into a three-dimensional space according to the lens information of the virtual lens in the scene data and the depth information corresponding to the screen pixels; calculating a target horizontal observation distance between the screen pixel and the virtual lens according to the three-dimensional coordinates of the screen pixel in the three-dimensional space; determining the target height of the screen pixel in the three-dimensional space according to the three-dimensional coordinates; determining target display parameters of the fluid special effects under the target horizontal observation distance and the target height based on a preset relation between the display parameters of the fluid special effects and the horizontal observation distance and the height; rendering is carried out based on the target display parameters corresponding to the screen pixels and scene data, and a virtual scene with a fluid special effect and a depth sense is obtained.
According to the embodiment of the application, the screen pixels are mapped into the three-dimensional space to obtain the target height of each screen pixel in the three-dimensional space and the target horizontal observation distance between each screen pixel and the virtual lens. This widens the differences among screen pixels in distance and height, so the display parameters of the fluid special effect obtained for each screen pixel by adjusting for the target horizontal observation distance and the target height also differ more, and the fluid special effect presents a stronger sense of scene depth in the virtual scene.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the storage medium may include: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc, and the like.
The foregoing has described in detail the fluid special effect processing methods, apparatus, computer devices, and computer storage media provided by the embodiments of the present application. Specific examples have been presented herein to illustrate the principles and embodiments of the present application, and the above description of the embodiments is only intended to aid in understanding the methods and core ideas of the present application. Meanwhile, those skilled in the art may make variations in the specific embodiments and the scope of application in light of the ideas of the present application; in view of the above, the contents of this description should not be construed as limiting the present application.

Claims (10)

1. A fluid special effect processing method, comprising:
acquiring scene data in a rendering buffer zone, and mapping coordinates of screen pixels into a three-dimensional space according to lens information of a virtual lens in the scene data and depth information corresponding to the screen pixels;
calculating a target horizontal observation distance between the screen pixel and the virtual lens according to the three-dimensional coordinates of the screen pixel in the three-dimensional space;
Determining the target height of the screen pixel in the three-dimensional space according to the three-dimensional coordinates;
determining a target display parameter of the fluid special effect under the target horizontal observation distance and the target height based on a preset relation between the display parameter of the fluid special effect and the horizontal observation distance and the height;
rendering is carried out based on the target display parameters and scene data corresponding to the screen pixels, and a virtual scene with the fluid special effect and the depth feeling is obtained.
2. The method of claim 1, wherein determining the target display parameters for the fluid effect at the target horizontal viewing distance and the target height based on the preset relationship between the display parameters for the fluid effect and the horizontal viewing distance and the height comprises:
acquiring scene layout information corresponding to the virtual scene, wherein the scene layout information comprises distribution positions of scene objects forming the virtual scene in the three-dimensional space and corresponding special effect heights;
determining the target special effect height corresponding to each screen pixel according to the distribution position and the three-dimensional coordinates of each screen pixel;
determining a distance display parameter of the fluid special effect for each screen pixel according to the corresponding relation between the display parameter and the horizontal observation distance in the preset relation and the target horizontal observation distance of each screen pixel;
And adjusting the distance display parameters according to the height relation between the target special effect height and the target height corresponding to each screen pixel to obtain the target display parameters of each screen pixel.
3. The method according to claim 2, wherein determining the target special effect height corresponding to each screen pixel according to the distribution position and the three-dimensional coordinates of each screen pixel comprises:
determining a scene object where each screen pixel is located according to the three-dimensional coordinates and the distribution positions of each screen pixel;
obtaining candidate special effect heights of each screen pixel according to the special effect height of the scene object where each screen pixel is located;
and adjusting the corresponding candidate special effect height according to the target horizontal observation distance of each screen pixel to obtain the corresponding target special effect height of each screen pixel.
4. The method according to claim 2, wherein the adjusting the distance display parameter according to the height relation between the target special effect height and the target height corresponding to each screen pixel to obtain the target display parameter of each screen pixel includes:
calculating the height difference between the target special effect height and the target height of each screen pixel;
And adjusting the distance display parameter according to the ratio between the height difference of each pixel and the target special effect height to obtain the target display parameter.
5. The method of claim 1, wherein the fluid effect is a fog effect and the display parameter comprises at least one of fog concentration, fog color, and fog density.
6. The method according to any one of claims 1-5, wherein rendering based on the target display parameters and scene data corresponding to the screen pixels to obtain a virtual scene with the fluid special effect and depth perception comprises:
selecting a target screen pixel from the screen pixels according to the depth information, wherein the target screen pixel corresponds to a close-up region of the virtual scene;
adjusting target display parameters of the target screen pixels to eliminate the display effect of the fluid special effect in the near-scene area;
rendering is carried out according to the adjusted target display parameters and the scene data, and the virtual scene is obtained.
7. The method according to any one of claims 1-5, wherein rendering based on the target display parameters and scene data corresponding to the screen pixels to obtain a virtual scene with the fluid special effect and depth perception comprises:
Obtaining a target special effect mask map for shielding a visible range of a game player in a virtual scene;
adjusting the target display parameters according to the pixel values of the pixels in the target special effect mask image;
rendering is carried out based on the adjusted target display parameters and the scene data, and a virtual scene with the fluid special effect and the depth feeling is obtained.
8. A fluid special effect processing apparatus, comprising:
the device comprises an acquisition unit, a rendering buffer area and a display unit, wherein the acquisition unit is used for acquiring scene data in the rendering buffer area and mapping coordinates of screen pixels into a three-dimensional space according to lens information of a virtual lens in the scene data and depth information corresponding to the screen pixels;
a distance calculation unit for calculating a target horizontal observation distance between the screen pixel and the virtual lens according to a three-dimensional coordinate of the screen pixel in the three-dimensional space;
a height determining unit for determining a target height of the screen pixel in the three-dimensional space according to the three-dimensional coordinates;
a parameter determining unit configured to determine a target display parameter of a fluid special effect at the target horizontal observation distance and the target height based on a preset relationship between the display parameter of the fluid special effect and the horizontal observation distance and the height;
And the rendering unit is used for rendering based on the target display parameters and scene data corresponding to the screen pixels to obtain the virtual scene with the fluid special effect and the depth sense.
9. A computer device comprising a memory and a processor; the memory stores a computer program, and the processor is configured to execute the computer program in the memory to perform the fluid special effect processing method according to any one of claims 1 to 7.
10. A computer readable storage medium for storing a computer program, the computer program being loaded by a processor to perform the fluid special effect processing method of any one of claims 1 to 7.
CN202311085472.7A 2023-08-25 2023-08-25 Fluid special effect processing method, device, computer equipment and storage medium Pending CN117065330A (en)

Publications (1)

Publication Number Publication Date
CN117065330A 2023-11-17

Family

ID=88703934



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination