CN113426131A - Virtual scene picture generation method and device, computer equipment and storage medium

Publication number
CN113426131A
Authority
CN
China
Prior art keywords
unit
target
grid
visual
basic
Legal status
Granted
Application number
CN202110750124.1A
Other languages
Chinese (zh)
Other versions
CN113426131B (en)
Inventor
唐竟人
Current Assignee
Tencent Technology Chengdu Co Ltd
Original Assignee
Tencent Technology Chengdu Co Ltd
Application filed by Tencent Technology Chengdu Co Ltd
Priority to CN202110750124.1A
Publication of CN113426131A
Application granted
Publication of CN113426131B
Status: Active


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation


Abstract

The application discloses a picture generation method and apparatus for a virtual scene, a computer device, and a storage medium, and relates to the technical field of virtual scenes. The method comprises the following steps: acquiring position information of an exploration source in a virtual scene; locating a target grid cell from a plurality of grid cells based on the position information of the exploration source; locating a target basic unit from at least two basic units contained in the target grid cell; determining each visual basic unit corresponding to the exploration source from the basic units around the target basic unit; and generating a scene picture of the virtual scene based on the visual basic units. This scheme improves the accuracy of the edge position of the visual field area while preserving the efficiency of visual field calculation, thereby improving the display effect of the game picture.

Description

Virtual scene picture generation method and device, computer equipment and storage medium
Technical Field
The embodiments of the present application relate to the technical field of virtual scenes, and in particular, to a picture generation method and apparatus for a virtual scene, a computer device, and a storage medium.
Background
Multiplayer Online Battle Arena (MOBA) games generally display game scene pictures based on a certain visual field mechanism.
The visual field mechanism in MOBA-type games is typically implemented by meshing the game scene. For example, a game developer divides the game scene into a plurality of square grid cells in advance; during a game, the grid cells within a certain range around the virtual units of the camp where the user is located are calculated as the grid cells visible to that user, and virtual units of other camps located in those visible grid cells are displayed in the game picture.
However, to keep the calculation of the visual field area efficient, the density of the grid cells in the game scene is usually not high, which makes each grid cell large. This results in low accuracy of the edge position of the visual field area, which degrades the display effect of the game picture.
Disclosure of Invention
The embodiments of the present application provide a picture generation method and apparatus for a virtual scene, a computer device, and a storage medium, which can improve the accuracy of the edge position of the visual field area while maintaining the efficiency of visual field calculation, thereby improving the display effect of the game picture. The technical solution is as follows:
in one aspect, a method for generating a picture of a virtual scene is provided, the method including:
acquiring position information of an exploration source in a virtual scene; the exploration source is a virtual object with a corresponding view distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each grid cell is divided into at least two basic cells;
locating a target grid cell from a plurality of the grid cells based on location information of the exploration source; the target grid unit is the grid unit where the exploration source is located;
locating a target basic unit from at least two basic units contained in the target grid unit based on the position information of the target grid unit and the position information of the exploration source; the target basic unit is a basic unit where the exploration source is located;
determining each visual basic unit corresponding to the exploration source from basic units around the target basic unit based on the visual field distance and the position information of the target basic unit;
generating a scene picture of the virtual scene based on the visual base unit; in the scene picture, a designated virtual object in the visual basic unit is in a visible state, and the designated virtual object is a virtual object which does not belong to the same camp as the exploration source.
In another aspect, there is provided a picture generation apparatus of a virtual scene, the apparatus including:
the position information acquisition module is used for acquiring the position information of the exploration source in the virtual scene; the exploration source is a virtual object with a corresponding view distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each grid cell is divided into at least two basic cells;
a grid cell positioning module for positioning a target grid cell from the plurality of grid cells based on the position information of the exploration source; the target grid unit is the grid unit where the exploration source is located;
a target basic unit positioning module, configured to position a target basic unit from at least two basic units included in the target grid unit based on the location information of the target grid unit and the location information of the exploration source; the target basic unit is a basic unit where the exploration source is located;
a basic unit determining module, configured to determine, based on the view distance and the position information of the target basic unit, each visual basic unit corresponding to the exploration source from the basic units around the target basic unit;
a picture generation module for generating a scene picture of the virtual scene based on the visual base unit; in the scene picture, a designated virtual object in the visual basic unit is in a visible state, and the designated virtual object is a virtual object which does not belong to the same camp as the exploration source.
In one possible implementation, in response to the grid cell being a square grid and the basic unit being a triangular grid divided by the two diagonals of the grid cell, the target basic unit positioning module is configured to,
acquiring the distances from the exploration source to four edges of the target grid unit based on the position information of the target grid unit and the position information of the exploration source;
locating the target base unit from at least two base units included in the target grid unit based on distances from the exploration source to four edges of the target grid unit.
In a possible implementation manner, when the target basic unit is located from at least two basic units included in the target grid cell based on the distances from the exploration source to the four edges of the target grid cell, the target basic unit locating module is configured to,
comparing the distances from the exploration source to the four edges of the target grid unit in pairs to obtain the magnitude relation between the distances from the exploration source to the four edges of the target grid unit;
based on the size relationship, locating the target base unit from at least two base units included in the target grid unit.
In one possible implementation, in response to the grid cell being a square grid, the base cell being a triangular grid divided by a connecting line between a center point of the square and each side of the square, the target base cell location module is configured to,
acquiring the coordinates of the central point of the target grid unit based on the position information of the target grid unit;
acquiring a connection line included angle based on the central point coordinate of the target grid unit and the position information of the exploration source; the line included angle is an included angle between a connecting line between the central point of the target grid unit and the exploration source and a reference line;
and positioning a target basic unit from at least two basic units contained in the target grid unit based on the line included angle.
In one possible implementation, in response to the grid cell being a square grid, the base cell being a triangular grid divided by a connecting line between a center point of the square and each side of the square, the target base cell location module is configured to,
acquiring coordinates of respective center points of at least two basic units contained in the target grid unit based on the position information of the target grid unit;
acquiring distances from the central points of the at least two basic units contained in the target grid unit to the exploration source respectively based on the coordinates of the central points of the at least two basic units contained in the target grid unit and the position information of the exploration source;
and locating a target basic unit from the at least two basic units contained in the target grid unit based on the distance between the central point of each of the at least two basic units contained in the target grid unit and the exploration source.
In one possible implementation, the base unit determining module is configured to,
acquiring a visual range of the exploration source based on the visual range distance and the position information of the target basic unit;
and traversing each basic unit in the visual range to determine the visual basic unit.
In one possible implementation, when acquiring the visual range of the exploration source based on the view distance and the position information of the target basic unit, the basic unit determining module is configured to,
acquiring position information of each obstacle in the virtual scene;
and acquiring the visual range of the exploration source based on the visual range distance, the position information of the target basic unit and the position information of each obstacle.
In one possible implementation, the apparatus further includes:
the assignment module is used for setting a state assignment for the visual basic unit before the picture generation module generates the scene picture of the virtual scene based on the visual basic unit, the state assignment decreasing gradually over time;
and the picture generation module is used for responding to the condition assignment of the visual basic unit not being decreased to 0, and generating the scene picture of the virtual scene based on the visual basic unit.
In one possible implementation, the assignment module is configured to,
setting the state assignment of the visual basic unit as an initial value in response to the current state assignment of the visual basic unit being 0;
resetting the state assignment of the visual base unit to the initial value in response to the current state assignment of the visual base unit not being 0.
In one possible implementation manner, the picture generation module is configured to,
generating a global view map; the global view map is used for indicating whether each basic unit in the virtual scene is in a visible state or not; and the visual base unit is in a visual state in the global view map;
and generating a scene picture of the virtual scene based on the global view map.
In one possible implementation, the apparatus further includes:
the grid unit determining module is used for determining, based on the global visual field map, the grid units corresponding to the fog-of-war effect in the virtual scene;
and the fog-of-war rendering module is used for rendering the fog-of-war effect in the scene picture of the virtual scene based on the grid units corresponding to the fog-of-war effect in the virtual scene.
In one possible implementation, the grid cell determination module is configured to,
acquiring state information of at least two basic units in the first grid unit based on the global view map, wherein the state information is used for indicating whether the corresponding basic units are in a visible state or not; the first grid cell is any one grid cell in the virtual scene;
in response to the number of basic units in the first grid cell that are not in a visible state reaching a number threshold, determining the first grid cell as a grid cell corresponding to the fog-of-war effect.
In another aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, where the memory stores at least one computer program, and the at least one computer program is loaded and executed by the processor to implement the picture generation method for a virtual scene according to the above aspect.
In another aspect, the present application provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is loaded and executed by a processor to implement the picture generation method for a virtual scene according to the above aspect.
In another aspect, embodiments of the present application provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them to cause the computer device to execute the picture generation method for a virtual scene according to the above aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the virtual scene is divided in advance according to grid units, the grid units are further divided into at least two basic units, when the visual field range in the virtual scene is determined, the grid unit where the exploration source is located is determined firstly, then the basic unit where the exploration source is located is determined based on the grid unit where the exploration source is located, the visual basic unit of the exploration source is determined based on the basic unit where the exploration source is located, the scene picture of the virtual scene is generated based on the visual basic unit of the exploration source, the accuracy of the edge position of the visual field area can be ensured because the basic unit has smaller size compared with the grid unit, meanwhile, in the process of calculating the visual field area, a two-stage positioning mode is adopted when the basic unit where the exploration source is located is positioned, namely the grid unit is positioned firstly, and then the basic unit is further positioned in the grid unit, in the process, compared with a scheme of directly positioning the basic unit where the search source is located from all the basic units, the complexity of positioning can be greatly simplified, and further the complexity of calculation of the visual field area is simplified.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those skilled in the art may obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for generating a picture of a virtual scene according to an embodiment of the present application;
fig. 2 to 4 are schematic views of three visual field regions according to an embodiment of the present application;
fig. 5 is a flowchart of a method for generating a picture of a virtual scene according to an embodiment of the present application;
fig. 6 to 8 are schematic diagrams of three grid cell divisions involved in the embodiment shown in fig. 5;
FIG. 9 is a schematic diagram of the grid cell coordinate relationship involved in the embodiment shown in FIG. 5;
FIG. 10 is a schematic diagram of the exploration source location according to the embodiment shown in FIG. 5;
FIG. 11 is a schematic view of the angular division involved in the embodiment of FIG. 5;
fig. 12 to 14 are schematic diagrams of images of a scene according to the embodiment shown in fig. 5;
FIG. 15 is a schematic view of the field of view and fog calculation process involved in the embodiment of FIG. 5;
FIG. 16 is a schematic diagram of the principle of fog-of-war rendering according to the embodiment shown in FIG. 5;
fig. 17 is a block diagram illustrating a structure of a picture generation apparatus for a virtual scene according to an embodiment of the present application;
fig. 18 is a block diagram of a computer device according to another embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It is to be understood that reference herein to "a number" means one or more and "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Referring to fig. 1, a flowchart of a screen generation method for a virtual scene according to an exemplary embodiment of the present application is shown. The method may be executed by a computer device, where an application program for generating and displaying a virtual scene runs in the computer device, for example, the computer device may be a terminal running a virtual scene client, or the computer device may also be a background server corresponding to the virtual scene client running in the terminal. As shown in fig. 1, the method may include the steps of:
step 101, acquiring position information of an exploration source in a virtual scene; the exploration source is a virtual object with a corresponding view distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each of the grid cells is divided into at least two base cells.
The virtual scene refers to a virtual scene displayed (or provided) when an application program runs on a terminal. The virtual scene may simulate a real-world environment, may be a semi-simulated, semi-fictional three-dimensional environment, or may be an entirely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto. Optionally, the virtual scene is also used for battles between at least two virtual characters. Optionally, the virtual scene has virtual resources available to the at least two virtual characters. Optionally, the virtual scene includes a square/rectangular map containing a symmetric lower-left region and upper-right region; virtual characters belonging to two enemy camps each occupy one of the regions, and destroying the target building/site/base/crystal deep in the opposing region serves as the victory objective.
The exploration source may be a virtual object belonging to the camp where the user is located in the virtual scene. For example, the exploration source may be the virtual character controlled by the current user, a virtual character controlled by another user or by Artificial Intelligence (AI) in the camp where the current user is located, a virtual building of that camp, a virtual summoned unit of that camp, a virtual prop placed in the virtual scene by a virtual character of that camp (for example, a virtual sentry for exploring the surrounding field of view), and the like. Optionally, virtual summoned units include, but are not limited to, units triggered by user-/AI-controlled virtual characters (for example, skills and virtual ammunition), units automatically generated in the virtual scene and belonging to the current user's camp (for example, virtual soldiers), and the like.
The exploration sources have respective view distances in the virtual scene, that is, in the absence of occlusion, virtual objects of other camps within the view distance of the exploration sources are visible to the camps where the current user is located.
In the virtual scene, the view distances of different exploration sources may be the same or different. For example, virtual buildings and virtual objects typically have a large view distance (e.g., 200 distance units), while virtual props typically have a small view distance (e.g., 150 distance units). For another example, the same exploration source may have different view distances at different positions; for example, a virtual object may have a view distance of 200 units when located on high ground and a view distance of 150 units when moved to flat ground.
Step 102, positioning a target grid cell from a plurality of grid cells based on the position information of the exploration source; the target grid cell is the grid cell where the exploration source is located.
In the embodiment of the present application, the position information of the search source may be coordinates of the search source in the virtual scene.
Step 103, positioning a target basic unit from at least two basic units contained in the target grid unit based on the position information of the target grid unit and the position information of the exploration source; the target basic unit is the basic unit where the exploration source is located.
In the embodiment of the present application, the position information of the grid cell may be position information of a specified point (such as a lower left corner point of a square) in the grid cell in the virtual scene. Based on the position information of the specified point, and the size information (such as the side length) of the grid cell, the position information of other positions in the grid cell can be determined. Alternatively, the position information of the mesh unit may include position information of each vertex of the mesh unit in the virtual scene. Alternatively, the position information of the grid cell may also include position information of each edge of the grid cell in the virtual scene, and the like. The embodiment of the present application does not limit the form of the location information of the grid cell.
In the embodiment of the present application, the virtual scene is divided into a plurality of grid cells in advance, and further, each grid cell is also divided into two or more basic cells in advance. When the basic unit where the exploration source is located is calculated, the target grid unit where the exploration source is located, and then the basic unit where the exploration source is located is determined in the target grid unit.
Step 104, determining each visual basic unit corresponding to the exploration source from the basic units around the target basic unit based on the view distance and the position information of the target basic unit.
Wherein, the visual basic unit refers to a basic unit which can be searched by a search source; or virtual objects of other camps in the visual base unit are visible to the camps where the exploration source is located.
In the embodiment of the present application, after the target basic unit where the exploration source is located has been determined, the visual basic units that the exploration source can explore can be determined, in combination with the view distance of the exploration source, from the basic units around the target basic unit that fall within its visual range.
Step 105, generating a scene picture of the virtual scene based on the visual basic units; in the scene picture, a designated virtual object in the visual basic unit is in a visible state, and the designated virtual object is a virtual object that does not belong to the same camp as the exploration source.
In the embodiment of the application, each virtual object in the virtual scene is divided into at least two camps. When a scene picture of a virtual scene is generated based on a visual basic unit, if a virtual object in other camps except for the camps where the exploration source is located exists in the visual basic unit, the virtual object in the other camps can be visible to each user in the camps where the exploration source is located, that is, the virtual object in the other camps exists in the generated scene picture. Optionally, when the virtual object in other camps is located in other base units other than the visible base unit, the virtual object in the other camps is invisible to each user in the camps where the exploration source is located.
In a possible implementation manner, the specified virtual object may be a virtual object in a non-hidden state (such as currently not hidden by skills/props). For example, if a virtual object in another camp exists in the visual base unit and the virtual object in the other camp is in a non-hidden state, the virtual object in the other camp can be visible to each user in the camp where the exploration source is located; correspondingly, if the virtual object in other camps is in a hidden state, even if the virtual object in the other camps is located in the visual base unit, the virtual object in the other camps may not be visible to each user in the camps where the exploration source is located.
In a possible implementation manner, when the search source does not have a function of detecting a virtual object in a hidden state, the specified virtual object may be a virtual object in a non-hidden state. Correspondingly, if the virtual objects in other camps exist in the visual base unit and the virtual objects in the other camps are in the hidden state, when the exploration source has the function of detecting the virtual objects in the hidden state, the virtual objects in the other camps are also visible to each user in the camps where the exploration source is located.
The virtual object that does not belong to the same camp as the search source may be a virtual object of an enemy camp of the search source or a virtual object of a neutral camp.
To sum up, in the solution shown in this embodiment of the application, the virtual scene is divided in advance into grid cells, and each grid cell is further divided into at least two basic units. When the visual field range in the virtual scene is determined, the grid cell where the exploration source is located is determined first, the basic unit where the exploration source is located is then determined within that grid cell, the visual basic units of the exploration source are determined based on that basic unit, and the scene picture of the virtual scene is generated based on those visual basic units. Because a basic unit is smaller than a grid cell, the accuracy of the edge position of the visual field area can be ensured. Meanwhile, a two-stage positioning manner is adopted when locating the basic unit where the exploration source is located: the grid cell is located first, and the basic unit is then further located within that grid cell. Compared with a scheme that directly locates the basic unit from among all basic units, this greatly simplifies the positioning, and in turn the calculation of the visual field area.
Taking a game scene of the present scheme applied to a MOBA game as an example, please refer to fig. 2 to fig. 4, which show schematic diagrams of three view areas related to the embodiment of the present application.
It is assumed that the area where the virtual grass is located should be outside the view area of the exploration source, and the other areas outside the virtual grass should be within the view area of the exploration source. In fig. 2, the computer device running the game directly uses the grid cell as the minimum unit of visual field calculation, so the smallest element of the computed visual field is a whole grid cell; that is, in fig. 2, the non-visible area (e.g., the filled area 21 in fig. 2) and the visible area (e.g., the area other than the non-visible area in fig. 2) are each composed of several complete square grid cells. Because the square grid cells are large and the virtual grass is irregularly placed, part of the non-visible area (such as the area where grid cell 22 in fig. 2 is located) actually contains a large portion of non-grass area, and correspondingly, part of the visible area (such as the area where grid cell 23 in fig. 2 is located) actually contains part of the virtual grass, so the accuracy of the edge position of the visual field area is poor. If the grid cell size is reduced to improve precision, the cells must shrink to a very fine granularity to achieve sufficient accuracy, making the computational complexity of exploration source positioning too high for the computing and storage capabilities of the computer device; if the cells are not shrunk enough, the poor edge accuracy of the visual field area remains unsolved.
In fig. 3 and 4, the computer device performs a two-level division of the virtual scene. The first level is divided by grid cells: the virtual scene is divided into a plurality of square grid cells. The second level is divided by basic units: each grid cell is further divided into 4 triangular basic units. On the one hand, when the computer device positions the exploration source, it first positions from the dimension of the grid cell, determining the grid cell where the exploration source is located, and then further locates the basic unit where the exploration source is located among the 4 basic units contained in that grid cell; compared with directly locating the basic unit from a large number of basic units, this greatly simplifies the positioning of the exploration source. On the other hand, when the computer device calculates the visual field area, visible and non-visible areas are distinguished in units of basic units, so the edge precision of the visual field area is higher; for example, in fig. 3 and 4, the minimum unit of the non-visible area (such as the filled area 31 in fig. 3 and the filled area 41 in fig. 4) and of the visible area is only one quarter of a grid cell, which greatly improves the precision of the visual field edge. For example, in contrast to fig. 2, in grid cell 32 in fig. 3, only the two triangular basic units covering the virtual grass are assigned to the non-visible area; correspondingly, in grid cell 33 in fig. 3, only the two triangular basic units not covering the virtual grass are assigned to the visible area. The situation in fig. 4 is similar to that in fig. 3 and is not described again here.
Referring to fig. 5, a flowchart of a screen generation method for a virtual scene according to an exemplary embodiment of the present application is shown. The method may be executed by a computer device, where an application program for generating and displaying a virtual scene runs in the computer device, for example, the computer device may be a terminal running a virtual scene client, or the computer device may also be a background server corresponding to the virtual scene client running in the terminal. As shown in fig. 5, the method may include the steps of:
step 501, acquiring position information of an exploration source in a virtual scene; the exploration source is a virtual object with a corresponding view distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each of the grid cells is divided into at least two base cells.
In the embodiment of the present application, the grid cells may be square, rectangular, diamond-shaped, or other area cells with a large size; the above-mentioned base unit may be an area unit of an arbitrary shape divided from the mesh units as long as at least two base units can constitute one mesh unit.
The shape of each of the base units may be the same, or the shape of each of the base units may be different.
The sizes of the respective base units may be the same, or the sizes of the respective base units may be different.
In one possible implementation, at least one side of the basic unit is not parallel to each side of the grid unit.
Since many obstacles (such as virtual grass) may exist in the virtual scene, and the obstacles may obstruct the view of the exploration source, the shape of the obstacles is usually irregular, and the shape of the grid cells is usually more regular, which results in poor matching between the edges of the grid cells and the edges of some obstacles. In order to improve the matching degree between the edges of the base unit and the obstacle, and thus improve the display effect of the edges of the visual field area, in the embodiment of the present application, at least one edge of the base unit divided from the grid unit may be non-parallel to each edge of the grid unit, so that the matching degree between the base unit and the edges of the obstacle is improved by the edges of the base unit in different extending directions.
For example, please refer to fig. 6 to 8, which show schematic diagrams of three grid cell divisions according to an embodiment of the present application.
As shown in fig. 6, the mesh unit may be a square, and the base unit may be two triangular meshes divided by one diagonal line 61 of the square.
As shown in fig. 7, the basic unit may be 4 triangular meshes divided by two diagonal lines (a diagonal line 71 and a diagonal line 72) of a square.
The basic unit can also be at least two polygonal meshes divided by connecting lines between the central point of the square and points on each side of the square; for example, as shown in fig. 8, the basic unit is 8 triangular meshes divided by the connecting lines between the center point 81 of the square and the midpoint of each edge and the four vertices of the square.
The embodiment of the present application is only illustrated by the dividing method of the grid cells shown in fig. 6 to 8, but the dividing method of the grid cells is not limited.
Step 502, positioning a target grid cell from a plurality of grid cells based on the position information of the exploration source; the target grid cell is the grid cell where the exploration source is located.
Taking a scene with a square or rectangular virtual scene and a square grid unit as an example, each grid unit is arranged in the virtual scene according to M rows and N columns. When the computer equipment positions a target grid unit from a plurality of grid units, a certain vertex of a virtual scene is taken as a coordinate origin, two edges corresponding to the vertex are taken as coordinate axes, a plane rectangular coordinate system is established, the coordinates of a search source are the coordinates of the search source in the plane rectangular coordinate system, and the row number and the column number of the target grid unit can be obtained by dividing the horizontal coordinate value and the vertical coordinate value of the search source by the side length of the grid unit respectively.
For example, assuming that the side length of the grid cell is 10 and the coordinates of the exploration source in the virtual scene are (1255, 863): dividing the abscissa 1255 of the exploration source by 10 gives a quotient of 125 and a remainder of 5, so the target grid cell is in the 126th column; correspondingly, dividing the ordinate 863 of the exploration source by 10 gives a quotient of 86 and a remainder of 3, so the target grid cell is in the 87th row. That is, the grid cell in the 87th row and 126th column of the virtual scene is the target grid cell where the exploration source is located. Based on the row and column numbers of the target grid cell, the computer device may further determine the index or coordinates of the target grid cell.
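As an illustration of this lookup, the following Python sketch reproduces the row/column computation above. It is not code from the patent; names such as GRID_SIZE and locate_grid_cell are assumptions, and indices here are 0-based while the worked example counts rows and columns from 1.

```python
# Illustrative sketch (not from the patent): locating the target grid cell
# from the exploration source's scene coordinates. GRID_SIZE stands for the
# side length of a square grid cell; the scene origin is assumed at (0, 0).

GRID_SIZE = 10  # side length of one grid cell, in scene units

def locate_grid_cell(x: float, y: float) -> tuple[int, int]:
    """Return the (column, row) index of the grid cell containing (x, y)."""
    col = int(x // GRID_SIZE)  # x = 1255 -> col 125, i.e. the 126th column
    row = int(y // GRID_SIZE)  # y = 863  -> row 86,  i.e. the 87th row
    return col, row

print(locate_grid_cell(1255, 863))  # -> (125, 86)
```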
Step 503, positioning a target basic unit from at least two basic units contained in the target grid unit based on the position information of the target grid unit and the position information of the exploration source; the target base unit is the base unit where the exploration source is located.
In the embodiment of the present application, one grid cell may be represented by an index or a coordinate of the grid cell in the virtual scene. Wherein, based on the index of the grid cell, the position information (such as coordinates) of the grid cell in the virtual scene can be queried or calculated.
The coordinates of a grid cell may be the coordinates of a certain specified point of the grid cell (e.g., the bottom left vertex of a square grid cell), and the coordinates may be combined with the size information of the grid cell (e.g., the side length of the square), i.e., the position of the grid cell in the virtual scene may be determined.
In this embodiment, after the computer device determines the target grid cell, it may determine at least two basic cells included in the target grid cell according to the position information of the target grid cell, for example, determine respective indexes or coordinates of the at least two basic cells included in the target grid cell.
The method for determining at least two basic units included in the target grid unit according to the position information of the target grid unit can be realized by a coordinate conversion mode (or a coordinate conversion formula) performed between the target grid unit and at least two basic units in the target grid unit.
The coordinate conversion is the basis of the subsequent calculations; therefore the conversion formula from basic unit coordinates to grid cell coordinates (which may also be referred to as scene coordinates), and the conversion formula from grid cell coordinates to basic unit index/coordinates, need to be clarified.
When splitting a scene into large grid cells, the developer can set the following known information: the coordinate origin p(ox, oy) of the grid in the virtual scene, the number of two-dimensional grid cells m × n, and the side length L of each large grid cell.
Please refer to fig. 9, which illustrates a grid cell coordinate relationship diagram according to an embodiment of the present application. As shown in fig. 9, the mesh unit 91 is divided into 4 triangular basic units by two diagonal lines, and it is derived that the obtained mesh unit and the basic unit have the following relationship.
As shown in the figure, the barycentric coordinates of the four triangular basic units A, B, C, D in the virtual scene can be obtained from the start coordinates p′(a′, b′) of the grid cell in the virtual scene. Taking A(a1, b1) as an example:
a1 = a′ + L/6
b1 = b′ + L/2
Likewise, the coordinates of B, C, and D in the virtual scene may be obtained.
From the above, if the index (xt, yt) of a triangular basic unit is known, the index (xg, yg) of the corresponding grid cell can be located, where the index (xg, yg) denotes the grid cell in the xg-th column and yg-th row of the virtual scene. Accordingly, the scene coordinates p′(a′, b′) of the grid cell are:
a′ = ox + xg × L
b′ = oy + yg × L
where (ox, oy) is the origin coordinate.
Then, the barycentric coordinates of the triangular basic units in the virtual scene can be calculated from the known conditions, realizing the conversion from index to scene coordinates.
When converting scene coordinates to the index of a triangular basic unit, the scene coordinates p(x, y) may first be converted into the index (xg, yg) of the corresponding grid cell:
xg = ⌊(x − ox) / L⌋
yg = ⌊(y − oy) / L⌋
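These conversions can be collected into a small sketch. This is an illustrative Python rendering under the stated definitions (origin (ox, oy), side length L, 0-based indices); the function names are assumptions, not the patent's API.

```python
import math

# Illustrative sketch of the index <-> scene-coordinate conversions above.

def grid_index_to_scene(xg: int, yg: int, ox: float, oy: float, L: float):
    """Scene coordinates p'(a', b') of the grid cell in column xg, row yg."""
    return ox + xg * L, oy + yg * L

def scene_to_grid_index(x: float, y: float, ox: float, oy: float, L: float):
    """Index (xg, yg) of the grid cell containing the scene point p(x, y)."""
    return math.floor((x - ox) / L), math.floor((y - oy) / L)

def left_triangle_centroid(a: float, b: float, L: float):
    """Centroid A(a1, b1) of the left triangle of a diagonally split cell,
    derived from the cell's start coordinates p'(a', b')."""
    return a + L / 6, b + L / 2
```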
In one possible implementation, in response to the grid cell being a square grid, the base unit being a triangular grid divided by two diagonal lines of the grid cell, the locating a target base unit from among at least two base units included in the target grid cell based on the position information of the target grid cell and the position information of the exploration source comprises:
based on the position information of the target grid unit and the position information of the exploration source, obtaining the distances from the exploration source to the four edges of the target grid unit;
and locating the target base unit from at least two base units contained in the target grid unit based on the distances from the exploration source to the four edges of the target grid unit.
Please refer to fig. 10, which illustrates a schematic diagram of exploration source location according to an embodiment of the present application. As shown in fig. 10, the grid cell is divided into 4 triangular basic units by its two diagonals. When the exploration source 1001 is in different basic units of the grid cell, the distribution of the distances (Δa, Δb, Δc, Δd) from the exploration source 1001 to the four sides of the grid cell also differs, so the basic unit in which the exploration source 1001 is located among the four basic units can be determined from these distances.
In one possible implementation, the locating the target base unit from at least two base units included in the target grid unit based on distances from the exploration source to four edges of the target grid unit includes:
comparing the distances from the exploration source to the four edges of the target grid unit in pairs to obtain the magnitude relation between the distances from the exploration source to the four edges of the target grid unit;
based on the size relationship, the target base unit is located from at least two base units included in the target grid unit.
As shown in fig. 10, after obtaining the coordinates of the grid cell in the virtual scene, the computer device may obtain the two-dimensional coordinates of the four sides of the grid cell from its side length. From the characteristics of the basic unit division shown in fig. 10, it follows that the target basic unit where the exploration source is located is the basic unit adjacent to the side with the shortest of the distances from the coordinate point to the four sides of the grid cell. Therefore, by comparing the magnitudes of Δa, Δb, Δc, and Δd, it can be determined that Δa is the minimum, and the exploration source is located in the left basic unit of the grid cell shown in fig. 10.
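A minimal sketch of this edge-distance test follows, assuming the cell's lower-left corner (cell_x, cell_y) and side length L are known; the function name and edge labels are illustrative only.

```python
# Illustrative sketch (assumed names): locating the triangular basic unit
# (left/right/bottom/top of a diagonally split square cell) by comparing
# the exploration source's distances to the cell's four edges.

def locate_triangle_by_edges(px: float, py: float,
                             cell_x: float, cell_y: float, L: float) -> str:
    distances = {
        "left":   px - cell_x,        # distance to the left edge
        "right":  (cell_x + L) - px,  # distance to the right edge
        "bottom": py - cell_y,        # distance to the bottom edge
        "top":    (cell_y + L) - py,  # distance to the top edge
    }
    # The source lies in the triangle adjacent to the nearest edge.
    return min(distances, key=distances.get)
```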
In one possible implementation, in response to the grid unit being a square grid, the base unit being a polygonal grid divided by a connecting line between a center point of the square and each side of the square, the locating a target base unit from among at least two base units included in the target grid unit based on the position information of the target grid unit and the position information of the exploration source comprises:
acquiring the coordinates of the central point of the target grid unit based on the position information of the target grid unit;
acquiring a connection included angle based on the central point coordinate of the target grid unit and the position information of the exploration source; the included angle of the connecting line is the included angle between the connecting line between the central point of the target grid unit and the exploration source and the reference line;
and based on the connecting line angle, positioning a target basic unit from at least two basic units contained in the target grid unit.
In the embodiment of the present application, when the grid cell is a square grid and the basic units are polygonal grids divided by connecting lines between the center point of the square and points on each side of the square, the boundary of each polygonal grid consists of such connecting lines. That is, given a reference line, the angular sectors that the polygonal grids subtend at the center of the square together cover 360° without overlapping. In other words, taking the center point of the square as the origin, for any point in the square, if the angle between that point and the center of the square is known, the polygonal grid whose angular range contains the point can be determined. Based on this principle, the computer device may determine the target basic unit from the angle between the reference line and the line connecting the exploration source with the center point of the square.
Please refer to fig. 11, which illustrates a schematic diagram of angle division according to an embodiment of the present application. As shown in fig. 11, taking the x-axis as the reference line, the basic units are 8 triangular meshes formed by connecting the center point of the square with the midpoint of each edge and the four vertices of the square. The angle range corresponding to triangular mesh 1101 is 0° to 45°, the angle range corresponding to triangular mesh 1102 is 45° to 90°, and so on. If the computer device calculates that the angle between the line connecting exploration source 1103 with the origin and the abscissa axis is 30°, it can determine that exploration source 1103 is located in triangular mesh 1101.
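A minimal sketch of the angle test, assuming the 8 triangles are numbered counterclockwise from the positive x-axis in 45° sectors; the names and numbering are assumptions.

```python
import math

# Illustrative sketch: locating one of the 8 triangular basic units by the
# angle between the (center -> source) line and the x-axis reference line;
# triangle k is assumed to cover [k * 45, (k + 1) * 45) degrees.

def locate_triangle_by_angle(px: float, py: float,
                             cx: float, cy: float) -> int:
    angle = math.degrees(math.atan2(py - cy, px - cx)) % 360.0
    return int(angle // 45.0) % 8  # e.g. a 30-degree angle -> triangle 0
```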
In one possible implementation, in response to the grid cell being a square grid, the base cell being a triangular grid divided by a connecting line between a center point of the square and each side of the square, the locating a target base cell from at least two base cells included in the target grid cell based on the location information of the target grid cell and the location information of the exploration source comprises:
acquiring coordinates of respective center points of at least two basic units contained in the target grid unit based on the position information of the target grid unit;
acquiring distances from the central points of the at least two basic units contained in the target grid unit to the exploration source respectively based on the coordinates of the central points of the at least two basic units contained in the target grid unit and the position information of the exploration source;
and locating the target basic unit from the at least two basic units contained in the target grid unit based on the distance between the central point of each of the at least two basic units contained in the target grid unit and the exploration source.
For example, taking fig. 9 as an example, after grid cell 91 is divided into 4 triangular basic units by its two diagonals, the computer device may calculate the coordinates of the center points A, B, C, D of the 4 basic units in the virtual scene, and then, based on those coordinates and the coordinates of the exploration source in the virtual scene, compute the distances from the exploration source to A, B, C, and D respectively; the basic unit whose center point corresponds to the minimum distance is the target basic unit where the exploration source is located.
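A minimal sketch of this nearest-center test; the centroid list would be produced by a conversion such as the one sketched earlier, and all names are assumptions.

```python
# Illustrative sketch (assumed names): locating the basic unit whose center
# point is nearest to the exploration source.

def locate_by_nearest_center(px: float, py: float,
                             centroids: list[tuple[float, float]]) -> int:
    def sq_dist(c: tuple[float, float]) -> float:
        # Squared distance is sufficient for comparison.
        return (c[0] - px) ** 2 + (c[1] - py) ** 2
    return min(range(len(centroids)), key=lambda i: sq_dist(centroids[i]))
```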
After determining the target base unit, the computer device may determine, from the base units around the target base unit, each visible base unit corresponding to the search source based on the view distance and the position information of the target base unit.
Step 504, acquiring the visual range of the exploration source based on the visual range distance and the position information of the target basic unit.
In a possible implementation manner, the acquiring the visual range of the exploration source based on the visual range distance and the position information of the target base unit includes:
acquiring position information of each obstacle in the virtual scene;
and acquiring the visual range of the search source based on the visual range, the position information of the target basic unit and the position information of each obstacle.
Since obstacles usually exist in the virtual scene and may block the visible area of the exploration source, the exploration source may not be able to explore all areas within its view distance. Therefore, in this embodiment of the application, after determining the target basic unit, the computer device may further determine the visual range of the exploration source by combining the view distance of the exploration source, the position information of the target basic unit, and the position information of the obstacles.
The visual range of the search source may be an area range in which the distance from the search source in the virtual scene is not greater than the visual range and no obstacle exists between the search source and the virtual scene.
Step 505, traversing each basic unit in the visual range, and determining the visual basic unit.
In one possible implementation, after the computer device determines the visual range of the exploration source, it may traverse the basic units in the virtual scene to determine the basic units within the visual range.
When determining whether a basic unit is within the visual range of the exploration source, the computer device acquires the coordinates of the center point of the basic unit and determines whether those coordinates fall within the visual range of the exploration source. If they do, the basic unit is determined to be a visual basic unit within the visual range; otherwise, it is not.
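Steps 504 and 505 can be sketched as follows. This is illustrative Python, not the patent's implementation: blocked() stands in for whatever line-of-sight test is used, and the attribute names (unit.center, source.x, source.y) are assumptions.

```python
# Illustrative sketch of steps 504-505: a basic unit is visible when its
# center point lies within the view distance and no obstacle blocks the
# line from the source to it.

def visible_base_units(source, view_distance, base_units, blocked):
    visible = []
    for unit in base_units:
        cx, cy = unit.center
        dx, dy = cx - source.x, cy - source.y
        if dx * dx + dy * dy > view_distance ** 2:
            continue  # center point is outside the view distance
        if blocked(source, (cx, cy)):
            continue  # an obstacle occludes this unit
        visible.append(unit)
    return visible
```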
At step 506, a state assignment is set for the visual base unit, the state assignment decreasing with time.
In this embodiment, when a basic unit is determined to be a visual basic unit, the computer device may set for it a state assignment that decreases over time, indicating that the basic unit is in a visible state for the camp of the exploration source; correspondingly, when the state assignment decreases to 0, the basic unit exits the visible state for the camp of the exploration source.
In one possible implementation, setting state assignments for visual base units includes:
setting the state assignment of the visual basic unit as an initial value in response to the current state assignment of the visual basic unit being 0;
and resetting the state assignment of the visual base unit to the initial value in response to the current state assignment of the visual base unit not being 0.
In this embodiment of the application, the computer device may periodically execute steps 501 to 506. In each execution, when the current state assignment of a visual basic unit determined this time is 0, the unit was in an invisible state before the current time, and its state assignment is directly set to the initial value; when the current state assignment is not 0, the unit was already in a visible state before the current time, and the duration of its visible state is refreshed.
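A minimal sketch of this state-assignment scheme; the initial value and all names are assumptions.

```python
# Illustrative sketch of step 506: each visual basic unit carries a state
# value that is (re)set when explored and decremented every tick; the unit
# stays visible while the value is not 0.

INITIAL_TTL = 5  # assumed: ticks a unit stays lit after last explored

def mark_visible(state: dict, unit_id) -> None:
    # Whether the current value is 0 (was invisible) or not (refresh),
    # the state assignment is set back to the initial value.
    state[unit_id] = INITIAL_TTL

def tick(state: dict) -> None:
    for unit_id, value in state.items():
        if value > 0:
            state[unit_id] = value - 1  # exits the visible state at 0

def is_visible(state: dict, unit_id) -> bool:
    return state.get(unit_id, 0) > 0
```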
Step 507, generating a scene picture of the virtual scene based on the visual basic unit; in the scene picture, a designated virtual object in the visual basic unit is in a visible state, and the designated virtual object is a virtual object which does not belong to the same camp as the exploration source.
In the embodiment of the present application, in response to the state assignment of the visual base unit not being decremented to 0, a scene picture of the virtual scene is generated based on the visual base unit.
That is, when the state assignment of the visual base unit is not decremented to 0, the specified virtual object in the visual base unit in the scene picture of the generated virtual scene is in a visible state.
Correspondingly, if the state assignment of the subsequent visual base unit is decreased to 0, the visual base unit exits the visual state and is converted into the non-visual base unit, and in the scene picture of the subsequently generated virtual scene, the designated virtual object in the non-visual base unit is in the invisible state.
In the solution shown in the embodiment of the present application, generating a scene picture of the virtual scene based on the visual base unit includes:
generating a global view map; the global view map is used for indicating whether each basic unit in the virtual scene is in a visible state; and the visual basic unit is in a visual state in the global view map;
and generating a scene picture of the virtual scene based on the global view map.
In the embodiment of the application, when the computer device generates a scene picture, it may do so from the global perspective of the virtual scene. A virtual scene generally contains a plurality of exploration sources corresponding to a single camp. When generating the scene picture, the computer device merges the visual basic units corresponding to the plurality of exploration sources to obtain the global visual basic units, generates a global view map based on the global visual basic units to indicate which basic units in the virtual scene are lit, and then generates the scene picture based on the global view map.
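Purely as an illustration, a minimal Python sketch of this merging step follows, assuming each exploration source's visual basic units are available as a set; the names are hypothetical:

```python
def build_global_view_map(per_source_visible, all_units):
    # per_source_visible: one set of visual basic units per exploration
    # source of the camp; their union is the set of lit units.
    lit = set().union(*per_source_visible)
    # The global view map marks every basic unit as lit or unlit.
    return {unit: (unit in lit) for unit in all_units}
```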
Through the steps of the embodiment of the application, the computer device can provide a high-precision edge display effect for the visual field area. Please refer to fig. 12 to 14, which illustrate scene picture diagrams according to embodiments of the present application. In fig. 12 to 14, the virtual grass in the virtual scene blocks the view of camps other than that of the virtual object 1201. As shown in fig. 12, when the virtual object 1201 is outside the edge of the virtual grass, the virtual object 1201 is visible to users of the other camps; as shown in fig. 13 and 14, when the virtual object 1201 moves within the edge of the virtual grass, the virtual object 1201 is invisible to users of the other camps.
Step 508, determining grid cells corresponding to the fog-of-war effect in the virtual scene based on the global view map.
In the embodiment of the application, regions of the virtual scene that are invisible to the user may be rendered with a fog-of-war effect, improving the realism of the display of the invisible regions. Generally speaking, the accuracy of the edge position of the fog-of-war effect has little influence on the display effect of the scene picture of the virtual scene; therefore, to improve rendering efficiency, in this embodiment of the application the computer device may render the fog-of-war effect in units of grid cells based on the global view map.
In one possible implementation, the determining, based on the global view map, grid cells corresponding to the fog-of-war effect in the virtual scene includes:
acquiring state information of at least two basic units in a first grid cell based on the global view map, the state information being used for indicating whether the corresponding basic unit is in a visible state; the first grid cell is any one grid cell in the virtual scene;
and determining the first grid cell as a grid cell corresponding to the fog-of-war effect in response to the number of basic units not in the visible state in the first grid cell reaching a number threshold.
In the embodiment of the present application, when the number of basic units not in the visible state in a grid cell reaches the number threshold, the computer device may determine that the grid cell is a grid cell corresponding to the fog-of-war effect.
For example, when the number threshold is 1, any grid cell containing one or more basic units that are not in a visible state is determined to be a grid cell corresponding to the fog-of-war effect.
The number threshold may be set by a developer according to how the basic units are divided, or according to a configuration operation of a user, for example a configuration of the picture quality.
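For illustration only, a minimal Python sketch of this threshold test follows; the data layout (a mapping from each grid cell to the basic units it contains) is an assumption for the example:

```python
def fog_grid_cells(global_view_map, grid_cells, threshold=1):
    # grid_cells: mapping of grid cell -> the basic units it contains.
    fogged = []
    for cell, units in grid_cells.items():
        invisible = sum(1 for u in units if not global_view_map[u])
        # A grid cell is rendered with the fog-of-war effect once the
        # number of its basic units not in a visible state reaches the
        # number threshold.
        if invisible >= threshold:
            fogged.append(cell)
    return fogged
```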
Step 509, rendering the fog-of-war effect in the scene picture of the virtual scene based on the grid cells corresponding to the fog-of-war effect in the virtual scene.
The computer device may render the fog-of-war effect at the corresponding grid cells in the scene picture of the virtual scene, and then display the result in the display interface of the virtual scene.
In the embodiment of the present application, locating the target basic unit within the target grid cell is in effect upsampling (increasing precision) the target grid cell, while determining the grid cells corresponding to the fog-of-war effect based on the global view map is in effect downsampling (reducing precision) the basic units. That is, the computer device may flexibly use the upsampled basic units (such as triangles) and the downsampled grid cells (such as squares) to balance precision and computation speed in the view and fog-of-war calculations.
Please refer to fig. 15, which illustrates a view and fog-of-war calculation process according to an embodiment of the present application. As shown in fig. 15, taking as an example a square grid cell divided into a plurality of triangular basic units, the process may include the following steps:
S1501, first, the exploration sources, such as heroes, soldiers, and bullets with an exploration function, are selected by traversing the characters on the field.
S1502, the scene coordinates of each exploration source are converted into a basic triangle index, and all triangle areas within the maximum view range are then acquired according to the configuration.
S1503, the visible basic triangular grids (i.e., the visual range) of the exploration source are obtained by overlapping the maximum view range with the precomputed obstacle masks of the corresponding basic triangles.
S1504, the visible basic triangular grids are traversed for lighting assignment, where the assigned value corresponds to the lighting duration.
S1505, the values of the lit grids are attenuated in each main loop, and a grid returns to the unlit state when its value reaches 0.
S1506, an overall global view map is obtained. A downsampling scheme may be used to generate the original map for fog-of-war rendering in units of grid cells, reducing precision and improving computation speed. The resulting original fog-of-war rendering map is shown in fig. 16, where the white region 1601 is the region rendered without fog of war and the remaining black regions are the fog-of-war rendering regions.
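For illustration only, the following Python sketch ties steps S1501 to S1506 together as one main-loop iteration. It reuses the hypothetical helpers sketched above (assign_state, decay_state, fog_grid_cells); locate_basic_triangle, max_view_range_triangles, and obstacle_mask are likewise assumed helpers, not part of the embodiment:

```python
def view_and_fog_tick(actors, state, grid_cells, all_units):
    # One main-loop iteration of the view and fog-of-war calculation.
    sources = [a for a in actors if a.is_exploration_source]    # S1501
    lit_sets = []
    for src in sources:
        tri = locate_basic_triangle(src.position)               # S1502
        in_range = max_view_range_triangles(tri, src.view_distance)
        lit_sets.append(in_range - obstacle_mask(tri))          # S1503
    for units in lit_sets:                                      # S1504
        assign_state(state, units)
    decay_state(state)                                          # S1505
    global_view_map = {u: (u in state) for u in all_units}      # S1506
    return global_view_map, fog_grid_cells(global_view_map, grid_cells)
```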
To sum up, in the solution shown in the embodiment of the present application, a virtual scene is divided in advance into grid cells, and each grid cell is further divided into at least two basic units. When the visual field range in the virtual scene is determined, the grid cell where an exploration source is located is determined first, the basic unit where the exploration source is located is then determined based on that grid cell, the visual basic units of the exploration source are determined based on the basic unit where it is located, and a scene picture of the virtual scene is generated based on those visual basic units. Because a basic unit is smaller than a grid cell, the accuracy of the edge position of the visual field area is ensured. Meanwhile, a two-stage positioning manner is adopted when locating the basic unit where the exploration source is located: the grid cell is located first, and the basic unit is then located within that grid cell. Compared with locating the basic unit where the exploration source is located directly from among all basic units, this greatly reduces the complexity of positioning, and thereby the complexity of calculating the visual field area.
Fig. 17 is a block diagram of a picture generation apparatus for a virtual scene according to an exemplary embodiment of the present application, which may be used to perform all or part of the steps in the method shown in fig. 1 or fig. 5.
As shown in fig. 17, the apparatus includes:
a location information obtaining module 1701, configured to obtain location information of an exploration source in a virtual scene; the exploration source is a virtual object with a corresponding view distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each grid cell is divided into at least two basic units; the basic unit has at least one edge that is not parallel to any edge of the grid cell;
a grid cell positioning module 1702, configured to position a target grid cell from a plurality of grid cells based on the position information of the exploration source; the target grid unit is the grid unit where the exploration source is located;
a target basic unit positioning module 1703, configured to position a target basic unit from at least two basic units included in the target grid unit based on the location information of the target grid unit and the location information of the exploration source; the target basic unit is a basic unit where the exploration source is located;
a basic unit determining module 1704, configured to determine, based on the view distance and the position information of the target basic unit, each visual basic unit corresponding to the exploration source from basic units around the target basic unit;
a picture generation module 1705, configured to generate a scene picture of the virtual scene based on the visual basic unit; in the scene picture, a designated virtual object in the visual basic unit is in a visible state, and the designated virtual object is a virtual object which does not belong to the same camp as the exploration source.
In one possible implementation, in response to the grid cell being a square grid, the base cell is a triangular grid divided by two diagonals of the grid cell,
the target basic unit positioning module 1703 is configured to,
acquiring the distances from the exploration source to four edges of the target grid unit based on the position information of the target grid unit and the position information of the exploration source;
locating the target base unit from at least two base units included in the target grid unit based on distances from the exploration source to four edges of the target grid unit.
In a possible implementation manner, when locating the target basic unit from at least two basic units included in the target grid cell based on the distances from the exploration source to the four edges of the target grid cell, the target basic unit positioning module 1703 is configured to,
comparing the distances from the exploration source to the four edges of the target grid unit in pairs to obtain the magnitude relation between the distances from the exploration source to the four edges of the target grid unit;
based on the magnitude relation, locating the target basic unit from at least two basic units included in the target grid cell.
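For illustration only, a minimal Python sketch of this comparison-based location follows, assuming a square cell split by its two diagonals into four triangles labeled by the adjacent edge; the coordinate convention (cell origin at the bottom-left corner) is an assumption for the example:

```python
def locate_triangle_by_edge_distance(px, py, cell_x, cell_y, size):
    # Distances from the exploration source to the four edges of the
    # square grid cell.
    local_x, local_y = px - cell_x, py - cell_y
    distances = {
        "left": local_x,
        "right": size - local_x,
        "bottom": local_y,
        "top": size - local_y,
    }
    # The two diagonals split the square into four triangles, each
    # adjacent to one edge; the source lies in the triangle whose
    # edge it is closest to (ties on a diagonal go to either side).
    return min(distances, key=distances.get)
```

Because the diagonals bisect the corner angles, the nearest edge always identifies the containing triangle, so four distances and a few pairwise comparisons suffice.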
In one possible implementation, in response to the grid cell being a square grid, the base cell being a triangular grid divided by connecting lines between the center point of the square and each side of the square, the target basic unit positioning module 1703 is configured to,
acquiring the coordinates of the central point of the target grid unit based on the position information of the target grid unit;
acquiring a connection line included angle based on the central point coordinate of the target grid unit and the position information of the exploration source; the line included angle is an included angle between a connecting line between the central point of the target grid unit and the exploration source and a reference line;
and positioning a target basic unit from at least two basic units contained in the target grid unit based on the line included angle.
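For illustration only, a minimal Python sketch of this angle-based location follows, assuming four triangles and the positive x axis as the reference line; the sector boundaries are assumptions for the example:

```python
import math

def locate_triangle_by_angle(px, py, center_x, center_y):
    # Angle between the line (cell center -> exploration source) and
    # the reference line (here the positive x axis), in [0, 360).
    angle = math.degrees(math.atan2(py - center_y, px - center_x)) % 360.0
    # Each 90-degree sector around the center maps to one triangle.
    if 45.0 <= angle < 135.0:
        return "top"
    if 135.0 <= angle < 225.0:
        return "left"
    if 225.0 <= angle < 315.0:
        return "bottom"
    return "right"
```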
In one possible implementation, in response to the grid cell being a square grid, the base cell being a triangular grid divided by connecting lines between the center point of the square and each side of the square, the target basic unit positioning module 1703 is configured to,
acquiring coordinates of respective center points of at least two basic units contained in the target grid unit based on the position information of the target grid unit;
acquiring distances from the central points of the at least two basic units contained in the target grid unit to the exploration source respectively based on the coordinates of the central points of the at least two basic units contained in the target grid unit and the position information of the exploration source;
and locating a target basic unit from the at least two basic units contained in the target grid unit based on the distance between the central point of each of the at least two basic units contained in the target grid unit and the exploration source.
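For illustration only, a minimal Python sketch of this nearest-center location follows; the mapping of triangle indexes to precomputed center points is an assumption for the example:

```python
def locate_triangle_by_center_distance(px, py, triangle_centers):
    # triangle_centers: mapping of triangle index -> (cx, cy), the
    # precomputed center point of each basic unit in the grid cell.
    def sq_dist(item):
        cx, cy = item[1]
        return (px - cx) ** 2 + (py - cy) ** 2
    # The target basic unit is the one whose center point is nearest
    # to the exploration source.
    return min(triangle_centers.items(), key=sq_dist)[0]
```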
In one possible implementation, the base unit determining module 1704 is configured to,
acquiring a visual range of the exploration source based on the view distance and the position information of the target basic unit;
and traversing each basic unit in the visual range to determine the visual basic unit.
In one possible implementation, when acquiring the visual range of the exploration source based on the view distance and the position information of the target basic unit, the basic unit determining module 1704 is configured to,
acquiring position information of each obstacle in the virtual scene;
and acquiring the visual range of the exploration source based on the view distance, the position information of the target basic unit, and the position information of each obstacle.
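For illustration only, a minimal Python sketch combining the view distance with precomputed obstacle information follows; both helpers are assumptions for the example:

```python
def visual_range(target_unit, view_distance, units_within, obstacle_mask):
    # units_within(target_unit, view_distance): all basic units whose
    # center lies within the view distance of the target basic unit.
    # obstacle_mask: the set of basic units occluded by obstacles for
    # this position, assumed to be precomputed.
    candidates = units_within(target_unit, view_distance)
    return {u for u in candidates if u not in obstacle_mask}
```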
In one possible implementation, the apparatus further includes:
an assignment module, configured to set a state assignment for the visual basic unit before the picture generation module 1705 generates the scene picture of the virtual scene based on the visual basic unit, where the state assignment decreases with time;
the picture generation module 1705 is configured to, in response to the state assignment of the visual basic unit not having decreased to 0, generate a scene picture of the virtual scene based on the visual basic unit.
In one possible implementation, the assignment module is configured to,
setting the state assignment of the visual basic unit as an initial value in response to the current state assignment of the visual basic unit being 0;
resetting the state assignment of the visual base unit to the initial value in response to the current state assignment of the visual base unit not being 0.
In one possible implementation, the picture generation module 1705 is configured to,
generating a global view map; the global view map is used for indicating whether each basic unit in the virtual scene is in a visible state; and the visual basic unit is in a visible state in the global view map;
and generating a scene picture of the virtual scene based on the global view map.
In one possible implementation, the apparatus further includes:
a grid cell determining module, configured to determine grid cells corresponding to the fog-of-war effect in the virtual scene based on the global view map;
and a fog-of-war rendering module, configured to render the fog-of-war effect in the scene picture of the virtual scene based on the grid cells corresponding to the fog-of-war effect in the virtual scene.
In one possible implementation, the grid cell determining module is configured to,
acquiring state information of at least two basic units in a first grid cell based on the global view map, the state information being used for indicating whether the corresponding basic unit is in a visible state; the first grid cell is any one grid cell in the virtual scene;
and determining the first grid cell as a grid cell corresponding to the fog-of-war effect in response to the number of basic units not in the visible state in the first grid cell reaching a number threshold.
To sum up, in the solution shown in the embodiment of the present application, a virtual scene is divided in advance into grid cells, and each grid cell is further divided into at least two basic units. When the visual field range in the virtual scene is determined, the grid cell where an exploration source is located is determined first, the basic unit where the exploration source is located is then determined based on that grid cell, the visual basic units of the exploration source are determined based on the basic unit where it is located, and a scene picture of the virtual scene is generated based on those visual basic units. Because a basic unit is smaller than a grid cell, the accuracy of the edge position of the visual field area is ensured. Meanwhile, a two-stage positioning manner is adopted when locating the basic unit where the exploration source is located: the grid cell is located first, and the basic unit is then located within that grid cell. Compared with locating the basic unit where the exploration source is located directly from among all basic units, this greatly reduces the complexity of positioning, and thereby the complexity of calculating the visual field area.
Fig. 18 shows a block diagram of a computer device 1800 provided in an exemplary embodiment of the present application. The computer device 1800 may be a terminal, such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, a desktop computer, and the like.
Generally, computer device 1800 includes: a processor 1801 and a memory 1802.
The processor 1801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 1801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array).
Memory 1802 may include one or more computer-readable storage media, which may be non-transitory. Memory 1802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1802 is used to store at least one computer program/instruction for execution by the processor 1801 to implement the methods provided by the method embodiments herein.
In some embodiments, computer device 1800 may also optionally include: a peripheral interface 1803 and at least one peripheral. The processor 1801, memory 1802, and peripheral interface 1803 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1804, display 1805, camera assembly 1806, audio circuitry 1807, positioning assembly 1808, and power supply 1809.
The peripheral interface 1803 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1801 and the memory 1802.
The Radio Frequency circuit 1804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1804 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1804 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
The display screen 1805 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1805 is a touch display screen, the display screen 1805 also has the ability to capture touch signals on or over the surface of the display screen 1805. The touch signal may be input to the processor 1801 as a control signal for processing. At this point, the display 1805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard.
The camera assembly 1806 is used to capture images or video. Optionally, the camera assembly 1806 includes a front camera and a rear camera.
The audio circuitry 1807 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1801 for processing or inputting the electric signals to the radio frequency circuit 1804 to achieve voice communication. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. In some embodiments, audio circuitry 1807 may also include a headphone jack.
The positioning component 1808 is used to locate the current geographic location of the computer device 1800 for navigation or LBS (Location Based Service). The positioning component 1808 may be based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, or the Galileo system of Europe.
The power supply 1809 is used to supply power to the various components in the computer device 1800. The power supply 1809 may be alternating current, direct current, a disposable battery, or a rechargeable battery.
In some embodiments, computer device 1800 also includes one or more sensors 1810. The one or more sensors 1810 include, but are not limited to: acceleration sensor 1811, gyro sensor 1812, pressure sensor 1813, fingerprint sensor 1814, optical sensor 1815, and proximity sensor 1816.
Those skilled in the art will appreciate that the configuration illustrated in FIG. 18 is not intended to be limiting with respect to the computer device 1800 and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components may be employed.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, which may be the computer-readable storage medium contained in the memory of the above embodiments, or a separate computer-readable storage medium not incorporated in the terminal. The computer-readable storage medium stores at least one computer program that is loaded and executed by a processor to implement the methods according to the above embodiments of the present application.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
In an exemplary embodiment, a computer program product or computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method described in the above embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the aspects disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a scope of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings and described above, and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. A method for generating a scene of a virtual scene, the method comprising:
acquiring position information of an exploration source in a virtual scene; the exploration source is a virtual object with a corresponding view distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each grid cell is divided into at least two basic cells;
locating a target grid cell from a plurality of the grid cells based on location information of the exploration source; the target grid unit is the grid unit where the exploration source is located;
locating a target basic unit from at least two basic units contained in the target grid unit based on the position information of the target grid unit and the position information of the exploration source; the target basic unit is a basic unit where the exploration source is located;
determining each visual basic unit corresponding to the exploration source from basic units around the target basic unit based on the view distance and the position information of the target basic unit;
generating a scene picture of the virtual scene based on the visual base unit; in the scene picture, a designated virtual object in the visual basic unit is in a visible state, and the designated virtual object is a virtual object which does not belong to the same camp as the exploration source.
2. The method of claim 1, wherein in response to the grid cell being a square grid, the base cell is a triangular grid divided by two diagonals of the grid cell,
the locating a target basic unit from at least two basic units included in the target grid unit based on the position information of the target grid unit and the position information of the exploration source comprises:
acquiring the distances from the exploration source to four edges of the target grid unit based on the position information of the target grid unit and the position information of the exploration source;
locating the target base unit from at least two base units included in the target grid unit based on distances from the exploration source to four edges of the target grid unit.
3. The method of claim 2, wherein locating the target base unit from at least two base units included in the target grid cell based on distances from the exploration source to four sides of the target grid cell comprises:
comparing the distances from the exploration source to the four edges of the target grid unit in pairs to obtain the magnitude relation between the distances from the exploration source to the four edges of the target grid unit;
based on the magnitude relation, locating the target base unit from at least two base units included in the target grid unit.
4. The method of claim 1, wherein in response to the grid cell being a square grid, the base cell is a triangular grid drawn from a center point of the square and a line connecting the center point to each side of the square,
the locating a target basic unit from at least two basic units included in the target grid unit based on the position information of the target grid unit and the position information of the exploration source comprises:
acquiring the coordinates of the central point of the target grid unit based on the position information of the target grid unit;
acquiring a connection line included angle based on the central point coordinate of the target grid unit and the position information of the exploration source; the line included angle is an included angle between a connecting line between the central point of the target grid unit and the exploration source and a reference line;
and positioning a target basic unit from at least two basic units contained in the target grid unit based on the line included angle.
5. The method of claim 1, wherein in response to the grid cell being a square grid, the base cell is a triangular grid drawn from a center point of the square and a line connecting the center point to each side of the square,
the locating a target basic unit from at least two basic units included in the target grid unit based on the position information of the target grid unit and the position information of the exploration source comprises:
acquiring coordinates of respective center points of at least two basic units contained in the target grid unit based on the position information of the target grid unit;
acquiring distances from the central points of the at least two basic units contained in the target grid unit to the exploration source respectively based on the coordinates of the central points of the at least two basic units contained in the target grid unit and the position information of the exploration source;
and locating a target basic unit from the at least two basic units contained in the target grid unit based on the distance between the central point of each of the at least two basic units contained in the target grid unit and the exploration source.
6. The method according to claim 1, wherein the determining, from the basic units around the target basic unit, each visual basic unit corresponding to the exploration source based on the view distance and the position information of the target basic unit comprises:
acquiring a visual range of the exploration source based on the view distance and the position information of the target basic unit;
and traversing each basic unit in the visual range to determine the visual basic units.
7. The method of claim 6, wherein the acquiring the visual range of the exploration source based on the view distance and the position information of the target basic unit comprises:
acquiring position information of each obstacle in the virtual scene;
and acquiring the visual range of the exploration source based on the view distance, the position information of the target basic unit, and the position information of each obstacle.
8. The method of claim 1, wherein before the generating a scene picture of the virtual scene based on the visual basic unit, the method further comprises:
setting a state assignment for the visual base unit, the state assignment decreasing with time;
the generating a scene picture of the virtual scene based on the visual base unit comprises:
generating a scene screen of the virtual scene based on the visual base unit in response to the state assignment of the visual base unit not being decremented to 0.
9. The method of claim 8, wherein setting a status assignment for the visual base unit comprises:
setting the state assignment of the visual basic unit as an initial value in response to the current state assignment of the visual basic unit being 0;
resetting the state assignment of the visual base unit to the initial value in response to the current state assignment of the visual base unit not being 0.
10. The method of claim 1, wherein generating a scene picture of the virtual scene based on the visual base unit comprises:
generating a global view map; the global view map is used for indicating whether each basic unit in the virtual scene is in a visible state or not; and the visual base unit is in a visual state in the global view map;
and generating a scene picture of the virtual scene based on the global view map.
11. The method of claim 10, further comprising:
determining grid cells corresponding to the fog-of-war effect in the virtual scene based on the global view map;
and rendering the fog-of-war effect in the scene picture of the virtual scene based on the grid cells corresponding to the fog-of-war effect in the virtual scene.
12. The method of claim 11, wherein determining grid cells in the virtual scene corresponding to the fog-of-war effect based on the global view map comprises:
acquiring state information of at least two basic units in a first grid cell based on the global view map, wherein the state information is used for indicating whether the corresponding basic units are in a visible state; the first grid cell is any one grid cell in the virtual scene;
in response to the number of basic units in the first grid cell that are not in a visible state reaching a number threshold, determining the first grid cell as a grid cell corresponding to the fog-of-war effect.
13. A picture generation apparatus for a virtual scene, the apparatus comprising:
the position information acquisition module is used for acquiring the position information of the exploration source in the virtual scene; the exploration source is a virtual object with a corresponding view distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each grid cell is divided into at least two basic cells;
a grid cell positioning module for positioning a target grid cell from the plurality of grid cells based on the position information of the exploration source; the target grid unit is the grid unit where the exploration source is located;
a target basic unit positioning module, configured to position a target basic unit from at least two basic units included in the target grid unit based on the location information of the target grid unit and the location information of the exploration source; the target basic unit is a basic unit where the exploration source is located;
a basic unit determining module, configured to determine, based on the view distance and the position information of the target basic unit, each visual basic unit corresponding to the exploration source from basic units around the target basic unit;
a picture generation module for generating a scene picture of the virtual scene based on the visual base unit; in the scene picture, a designated virtual object in the visual basic unit is in a visible state, and the designated virtual object is a virtual object which does not belong to the same camp as the exploration source.
14. A computer device, characterized in that it comprises a processor and a memory in which at least one computer program is stored, which is loaded and executed by the processor to implement the picture generation method of a virtual scene according to any one of claims 1 to 12.
15. A computer-readable storage medium, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to implement the picture generation method of a virtual scene according to any one of claims 1 to 12.
CN202110750124.1A 2021-07-02 2021-07-02 Picture generation method and device of virtual scene, computer equipment and storage medium Active CN113426131B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110750124.1A CN113426131B (en) 2021-07-02 2021-07-02 Picture generation method and device of virtual scene, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110750124.1A CN113426131B (en) 2021-07-02 2021-07-02 Picture generation method and device of virtual scene, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113426131A true CN113426131A (en) 2021-09-24
CN113426131B CN113426131B (en) 2023-06-30

Family

ID=77758805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110750124.1A Active CN113426131B (en) 2021-07-02 2021-07-02 Picture generation method and device of virtual scene, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113426131B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000149059A (en) * 1996-07-25 2000-05-30 Sega Enterp Ltd Device and method for processing images, game system and vehicle play machine
WO2000042576A2 (en) * 1999-01-12 2000-07-20 Schlumberger Limited Scalable visualization for interactive geometry modeling
CN105957002A (en) * 2016-04-20 2016-09-21 山东大学 Image interpolation enlargement method and device based on triangular grid
CN106446351A (en) * 2016-08-31 2017-02-22 郑州捷安高科股份有限公司 Real-time drawing-oriented large-scale scene organization and scheduling technology and simulation system
US20180116726A1 (en) * 2016-10-31 2018-05-03 Edda Technology, Inc. Method and system for interactive grid placement and measurements for lesion removal
CN108619721A (en) * 2018-04-27 2018-10-09 腾讯科技(深圳)有限公司 Range information display methods, device and computer equipment in virtual scene
CN109523621A (en) * 2018-11-15 2019-03-26 腾讯科技(深圳)有限公司 Loading method and device, storage medium, the electronic device of object
CN109925715A (en) * 2019-01-29 2019-06-25 腾讯科技(深圳)有限公司 A kind of virtual waters generation method, device and terminal
CN110136262A (en) * 2019-05-17 2019-08-16 中科三清科技有限公司 Water body virtual visualization method and apparatus
CN110755845A (en) * 2019-10-21 2020-02-07 腾讯科技(深圳)有限公司 Virtual world picture display method, device, equipment and medium
CN110812844A (en) * 2019-11-06 2020-02-21 网易(杭州)网络有限公司 Path finding method in game, terminal and readable storage medium
CN111932683A (en) * 2020-08-06 2020-11-13 北京理工大学 Semantic-driven virtual pet behavior generation method under mixed reality scene
CN112245926A (en) * 2020-11-16 2021-01-22 腾讯科技(深圳)有限公司 Virtual terrain rendering method, device, equipment and medium
CN112494941A (en) * 2020-12-14 2021-03-16 网易(杭州)网络有限公司 Display control method and device of virtual object, storage medium and electronic equipment
CN112569602A (en) * 2020-12-25 2021-03-30 珠海金山网络游戏科技有限公司 Method and device for constructing terrain in virtual scene

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LENGJIAYI: "一些基本数字图像处理算法", Retrieved from the Internet <URL:https://blog.csdn.net/lengjiayi/article/details/84975872?ops_request_misc=&request_id=&biz_id=102&utm_term=%E4%B8%89%E8%A7%92%E7%BD%91%E6%A0%BC%20%E4%BA%8C%E6%AC%A1%E5%AE%9A%E4%BD%8D%20%E5%9B%BE%E5%83%8F%E8%BE%B9%E7%BC%98%20%E6%B8%B2%E6%9F%93&utm_medium=distribute.pc_search_result.none-task-blog-2~all~sobaiduweb~default-3-84975872.142^v88^control_2,239^v2^insert_chatgpt&spm=1018.2226.3001.4187> *
ZIQI WANG. ET AL: "Machined sharp edge restoration for triangle mesh workpiece models derived from grid-based machining simulation", COMPUTER-AIDED DESIGN AND APPLICATIONS, vol. 15, no. 6, pages 905 - 915 *
WU Yong; YAO Ling; TONG Weimin: "Image retrieval method based on visible range", Journal of Geo-Information Science, no. 08, pages 24-30

Also Published As

Publication number Publication date
CN113426131B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
US11256384B2 (en) Method, apparatus and device for view switching of virtual environment, and storage medium
CN102695032B (en) Information processor, information sharing method and terminal device
US20200316472A1 (en) Method for displaying information in a virtual environment
US20170186219A1 (en) Method for 360-degree panoramic display, display module and mobile terminal
EP3832605B1 (en) Method and device for determining potentially visible set, apparatus, and storage medium
CN112245926B (en) Virtual terrain rendering method, device, equipment and medium
CN110827391B (en) Image rendering method, device and equipment and storage medium
US20230082928A1 (en) Virtual aiming control
CN103959340A (en) Graphics rendering technique for autostereoscopic three dimensional display
JP2024509064A (en) Location mark display method, device, equipment and computer program
US20220291791A1 (en) Method and apparatus for determining selected target, device, and storage medium
US20220032188A1 (en) Method for selecting virtual objects, apparatus, terminal and storage medium
CN110889384A (en) Scene switching method and device, electronic equipment and storage medium
CN113724309A (en) Image generation method, device, equipment and storage medium
CN113426131B (en) Picture generation method and device of virtual scene, computer equipment and storage medium
CN113018865B (en) Climbing line generation method and device, computer equipment and storage medium
CN113797531A (en) Method and device for realizing occlusion rejection, computer equipment and storage medium
CN114241096A (en) Three-dimensional model generation method, device, equipment and storage medium
CN116828207A (en) Image processing method, device, computer equipment and storage medium
CN112717393A (en) Virtual object display method, device, equipment and storage medium in virtual scene
CN113762054A (en) Image recognition method, device, equipment and readable storage medium
CN113058266B (en) Method, device, equipment and medium for displaying scene fonts in virtual environment
CN116993946A (en) Model generation method, device, terminal and storage medium
CN111506680B (en) Terrain data generation and rendering method and device, medium, server and terminal
CN113205582B (en) Method, device, equipment and medium for generating and using baking paste chart

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40052344

Country of ref document: HK

GR01 Patent grant