CN113426131B - Picture generation method and device of virtual scene, computer equipment and storage medium

Info

Publication number
CN113426131B
CN113426131B
Authority
CN
China
Prior art keywords
unit, grid, target, visual, base
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110750124.1A
Other languages
Chinese (zh)
Other versions
CN113426131A (en)
Inventor
唐竟人
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Chengdu Co Ltd
Original Assignee
Tencent Technology Chengdu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Chengdu Co Ltd
Priority to CN202110750124.1A
Publication of CN113426131A
Application granted
Publication of CN113426131B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tesselation

Abstract

The application discloses a picture generation method and apparatus for a virtual scene, a computer device, and a storage medium, relating to the technical field of virtual scenes. The method comprises the following steps: acquiring position information of an exploration source in a virtual scene; locating a target grid cell from a plurality of grid cells based on the position information of the exploration source; locating a target base unit from at least two base units contained in the target grid cell; determining each visual base unit corresponding to the exploration source from the base units around the target base unit; and generating a scene picture of the virtual scene based on the visual base units. This scheme improves the accuracy of the edge position of the field-of-view area while preserving the efficiency of the field-of-view calculation, thereby improving the display effect of the game picture.

Description

Picture generation method and device of virtual scene, computer equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of virtual scenes, in particular to a method, a device, computer equipment and a storage medium for generating pictures of virtual scenes.
Background
Multiplayer Online Battle Arena (MOBA) games typically present game scene pictures based on a field-of-view mechanism.
The field-of-view mechanism in MOBA games is typically implemented by meshing the game scene. For example, a game developer divides the game scene into a plurality of square grid cells in advance; during the game, the grid cells within a certain range around the virtual units of the camp where the user is located are computed as the grid cells visible to that user, and the virtual units of other camps located in those visible grid cells are displayed in the game picture.
However, to keep the field-of-view calculation efficient, the density of the grid cells in the game scene is generally kept low, which makes each grid cell large; as a result, the accuracy of the edge position of the field-of-view area is low, which degrades the display effect of the game picture.
Disclosure of Invention
The embodiments of the present application provide a picture generation method and apparatus for a virtual scene, a computer device, and a storage medium, which can improve the accuracy of the edge position of the field-of-view area while keeping the field-of-view calculation efficient, thereby improving the display effect of the game picture. The technical scheme is as follows:
In one aspect, a method for generating a picture of a virtual scene is provided, the method comprising:
acquiring position information of an exploration source in a virtual scene; the exploration source is a virtual object with a corresponding field-of-view distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each grid cell is divided into at least two base units;
locating a target grid cell from the plurality of grid cells based on the position information of the exploration source; the target grid cell is the grid cell where the exploration source is located;
locating a target base unit from the at least two base units contained in the target grid cell based on the position information of the target grid cell and the position information of the exploration source; the target base unit is the base unit where the exploration source is located;
determining each visual base unit corresponding to the exploration source from the base units around the target base unit based on the field-of-view distance and the position information of the target base unit;
generating a scene picture of the virtual scene based on the visual base units; in the scene picture, the specified virtual object in the visual base units is in a visible state, and the specified virtual object is a virtual object that does not belong to the same camp as the exploration source.
In another aspect, there is provided a picture generation apparatus of a virtual scene, the apparatus including:
a position information acquisition module, used for acquiring the position information of an exploration source in a virtual scene; the exploration source is a virtual object with a corresponding field-of-view distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each grid cell is divided into at least two base units;
a grid cell positioning module, used for locating a target grid cell from the plurality of grid cells based on the position information of the exploration source; the target grid cell is the grid cell where the exploration source is located;
a target base unit positioning module, configured to locate a target base unit from the at least two base units contained in the target grid cell based on the position information of the target grid cell and the position information of the exploration source; the target base unit is the base unit where the exploration source is located;
a base unit determining module, configured to determine, from the base units around the target base unit, each visual base unit corresponding to the exploration source based on the field-of-view distance and the position information of the target base unit;
a picture generation module, used for generating a scene picture of the virtual scene based on the visual base units; in the scene picture, the specified virtual object in the visual base units is in a visible state, and the specified virtual object is a virtual object that does not belong to the same camp as the exploration source.
In one possible implementation, in response to the grid cell being a square grid and the base units being triangular grids divided by the two diagonals of the grid cell,
the target base unit positioning module is configured to,
acquiring distances from the exploration source to four sides of the target grid unit based on the position information of the target grid unit and the position information of the exploration source;
the target base unit is located from at least two base units contained in the target grid unit based on distances from the exploration source to four sides of the target grid unit.
In one possible implementation, a target base unit locating module, when locating the target base unit from at least two base units contained in the target grid unit based on distances from the exploration source to four sides of the target grid unit, is configured to,
Comparing the distances from the exploration source to the four sides of the target grid unit in pairs to obtain the size relation between the distances from the exploration source to the four sides of the target grid unit;
based on the size relationship, the target base unit is located from at least two base units contained in the target grid unit.
In one possible implementation, in response to the grid cell being a square grid, the base cell is a triangular grid divided by lines between a center point of the square and each side of the square, the target base cell positioning module is configured to,
acquiring center point coordinates of the target grid unit based on the position information of the target grid unit;
acquiring a connecting line included angle based on the central point coordinates of the target grid unit and the position information of the exploration source; the connecting line included angle is an included angle between a connecting line between the central point of the target grid unit and the exploration source and a reference line;
and positioning a target basic unit from at least two basic units contained in the target grid unit based on the connecting line included angle.
In one possible implementation, in response to the grid cell being a square grid, the base cell is a triangular grid divided by lines between a center point of the square and each side of the square, the target base cell positioning module is configured to,
Acquiring coordinates of respective center points of at least two basic units contained in the target grid unit based on the position information of the target grid unit;
acquiring distances between the respective center points of at least two basic units contained in the target grid unit and the exploration source based on the coordinates of the respective center points of at least two basic units contained in the target grid unit and the position information of the exploration source;
a target base unit is located from the at least two base units contained in the target grid unit based on distances between respective center points of the at least two base units contained in the target grid unit and the exploration source.
In one possible implementation, the base unit determination module is configured to,
acquiring a visual range of the exploration source based on the visual field distance and the position information of the target basic unit;
traversing each basic unit in the visual range, and determining the visual basic unit.
In one possible implementation, when obtaining the visual range of the exploration source based on the field-of-view distance and the position information of the target base unit, the base unit determination module is configured to,
Acquiring position information of each obstacle in the virtual scene;
and acquiring the visual range of the search source based on the visual field distance, the position information of the target base unit and the position information of each obstacle.
In one possible implementation, the apparatus further includes:
the assignment module is used for setting state assignment for the visual basic unit before the picture generation module generates a scene picture of the virtual scene based on the visual basic unit, and the state assignment is decreased along with time;
and the picture generation module is used for generating a scene picture of the virtual scene based on the visual basic unit in response to the state assignment of the visual basic unit not being decremented to 0.
In one possible implementation, the assignment module is configured to,
setting the state assignment of the visual basic unit to an initial value in response to the current state assignment of the visual basic unit being 0;
and resetting the state assignment of the visual basic unit to the initial value in response to the current state assignment of the visual basic unit not being 0.
In one possible implementation, the picture generation module is configured to,
Generating a global view map; the global view map is used for indicating whether each basic unit in the virtual scene is in a visual state or not; and the visual basic unit is in a visual state in the global view map;
and generating a scene picture of the virtual scene based on the global view map.
In one possible implementation, the apparatus further includes:
the grid cell determining module is used for determining grid cells corresponding to the foggy effect in the virtual scene based on the global view map;
and the foggy rendering module is used for rendering the foggy effect in the scene picture of the virtual scene based on the grid cells corresponding to the foggy effect in the virtual scene.
In one possible implementation, the grid cell determination module is configured to,
acquiring state information of at least two basic units in a first grid unit based on the global view map, wherein the state information is used for indicating whether the corresponding basic units are in a visual state or not; the first grid cell is any grid cell in the virtual scene;
in response to the number of base units in the first grid cell that are not in a visible state reaching a number threshold, determining the first grid cell as a grid cell corresponding to the foggy effect.
In another aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, where at least one computer program is stored in the memory, where the at least one computer program is loaded and executed by the processor to implement a method for generating a picture of a virtual scene as described in the above aspect.
In another aspect, embodiments of the present application provide a computer readable storage medium having at least one computer program stored therein, the at least one computer program being loaded and executed by a processor to implement a method for generating a picture of a virtual scene as described in the above aspect.
In another aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the picture generation method of the virtual scene described in the above aspect.
The beneficial effects of the technical solutions provided by the embodiments of the present application include at least the following:
The virtual scene is divided in advance into grid cells, each of which is further divided into at least two base units. When determining the field-of-view range in the virtual scene, the grid cell where the exploration source is located is determined first, then the base unit where the exploration source is located is determined based on that grid cell, and the visual base units of the exploration source are determined based on the base unit where the exploration source is located. In this way, the field-of-view area is computed at the granularity of the base units, which improves the accuracy of the edge position of the field-of-view area while preserving the efficiency of the field-of-view calculation, thereby improving the display effect of the game picture.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for generating a picture of a virtual scene according to an embodiment of the present application;
FIGS. 2-4 are schematic views of three fields of view according to embodiments of the present application;
FIG. 5 is a flowchart of a method for generating a virtual scene according to an embodiment of the present application;
fig. 6 to 8 are schematic views of three kinds of grid cell divisions involved in the embodiment shown in fig. 5;
FIG. 9 is a schematic diagram of grid cell coordinate relationships involved in the embodiment of FIG. 5;
FIG. 10 is a schematic diagram of exploration source locations involved in the embodiment of FIG. 5;
FIG. 11 is a schematic view of an angular division of the embodiment of FIG. 5;
FIGS. 12-14 are schematic views of images of a scene involved in the embodiment of FIG. 5;
FIG. 15 is a schematic view of the field of view and haze calculation process involved in the embodiment of FIG. 5;
FIG. 16 is an original view of the rendered mist involved in the embodiment of FIG. 5;
fig. 17 is a block diagram of a configuration of a screen generating apparatus for virtual scenes according to an embodiment of the present application;
fig. 18 is a block diagram of a computer device according to another embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
It should be understood that references herein to "a number" mean one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
Referring to fig. 1, a flowchart of a method for generating a picture of a virtual scene according to an exemplary embodiment of the present application is shown. The method may be performed by a computer device in which an application program for generating and displaying a virtual scene is running, for example, the computer device may be a terminal running a virtual scene client, or the computer device may also be a background server corresponding to the virtual scene client running in the terminal. As shown in fig. 1, the method may include the steps of:
step 101, acquiring position information of an exploration source in a virtual scene; the search source is a virtual object with a corresponding visual field distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each of the grid cells is divided into at least two base cells.
The virtual scene refers to a virtual scene that an application program displays (or provides) while running on a terminal. The virtual scene can be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto. Optionally, the virtual scene is also used for a virtual-scene battle between at least two virtual characters. Optionally, the virtual scene has virtual resources available for use by the at least two virtual characters. Optionally, the virtual scene includes a square or rectangular map; the map includes a symmetric lower-left corner area and upper-right corner area; virtual characters belonging to two hostile camps each occupy one of the areas, and destroying the target building/point/base/crystal deep in the opposing area serves as the winning goal.
The exploration source may be a virtual object (or virtual character) belonging to the user's camp in the virtual scene. For example, the exploration source may be a virtual character controlled by the current user, a virtual character controlled by other users or by artificial intelligence (AI) in the current user's camp, a virtual building of the current user's camp, a virtual summoned object of the current user's camp, a virtual prop placed in the virtual scene by a virtual character of the current user's camp (for example, a virtual sentry for exploring the surrounding field of view), and so on. Optionally, the virtual summoned objects include, but are not limited to, summoned objects triggered by a virtual character controlled by a user or by AI (e.g., skills, virtual ammunition), summoned objects automatically generated in the virtual scene and belonging to the current user's camp (e.g., virtual soldiers), and the like.
Each exploration source has its own field-of-view distance in the virtual scene; that is, in the absence of occlusion, virtual objects of other camps lying within the field-of-view distance of an exploration source are visible to the camp in which the current user is located.
In the virtual scene, the field-of-view distances of different exploration sources may be the same or different. For example, virtual buildings and virtual characters typically have a large field-of-view distance (e.g., 200 distance units), while virtual props typically have a small field-of-view distance (e.g., 150 distance units). For another example, the same exploration source may have different field-of-view distances at different positions: a virtual object on high ground may have a field-of-view distance of 200 units, and when it moves to flat ground, its field-of-view distance may drop to 150 units.
Step 102, positioning a target grid cell from a plurality of grid cells based on the position information of the exploration source; the target grid cell is the grid cell where the exploration source is located.
In the embodiment of the present application, the location information of the exploration source may be coordinates of the exploration source in the virtual scene.
Step 103, positioning a target basic unit from at least two basic units contained in the target grid unit based on the position information of the target grid unit and the position information of the exploration source; the target base unit is the base unit where the exploration source is located.
In the embodiment of the present application, the location information of the grid cell may be location information of a specified point (such as a lower left corner of a square) in the grid cell in the virtual scene. Based on the location information of the specified point, and the size information (e.g., side length) of the grid cell, location information of other locations in the grid cell may be determined. Alternatively, the position information of the grid cell may also include position information of each vertex of the grid cell in the virtual scene. Alternatively, the position information of the grid cell may also include position information of each side of the grid cell in the virtual scene, and so on. The embodiment of the application does not limit the form of the position information of the grid unit.
In the embodiment of the present application, the virtual scene is divided into a plurality of grid cells in advance, and further, each grid cell is further divided into two or more base cells in advance. When calculating the basic unit where the exploration source is located, firstly locating the target grid unit where the exploration source is located, and then determining the basic unit where the exploration source is located in the target grid unit.
Step 104, based on the field distance and the position information of the target base unit, each visual base unit corresponding to the search source is determined from the base units around the target base unit.
A visual base unit is a base unit that can be explored by the exploration source; in other words, virtual objects of other camps located in a visual base unit are visible to the camp where the exploration source is located.
In this embodiment of the present application, after the target base unit where the exploration source is located is determined, the visual base units that the exploration source can explore may be determined, by combining the exploration source's own field-of-view distance, from the base units around the target base unit that lie within that field-of-view distance.
Step 105, generating a scene picture of the virtual scene based on the visual base units; in the scene picture, the specified virtual object in the visual base units is in a visible state, and the specified virtual object is a virtual object that does not belong to the same camp as the exploration source.
In the embodiment of the application, the virtual objects in the virtual scene are divided into at least two camps. When generating a scene picture of the virtual scene based on the visual base units, if a visual base unit contains virtual objects of camps other than the camp where the exploration source is located, those virtual objects are visible to each user in the exploration source's camp; that is, they appear in the generated scene picture. Optionally, when a virtual object of another camp is located in a base unit other than the visual base units, that virtual object is not visible to the users in the exploration source's camp.
In one possible implementation, the specified virtual object may be a virtual object that is in a non-hidden state (e.g., not currently hidden by a skill or prop). For example, if a visual base unit contains virtual objects of other camps and those virtual objects are in a non-hidden state, they may be visible to each user in the exploration source's camp; correspondingly, if a virtual object of another camp is in a hidden state, it may be invisible to the users in the exploration source's camp even if it is located in a visual base unit.
In one possible implementation, when the exploration source does not have the function of detecting virtual objects in a hidden state, the specified virtual object may be a virtual object in a non-hidden state. Correspondingly, if a visual base unit contains virtual objects of other camps that are in a hidden state, and the exploration source does have the function of detecting hidden virtual objects, those virtual objects are also visible to each user in the exploration source's camp.
The virtual object that does not belong to the same camp as the exploration source may be a virtual object of a camp hostile to the exploration source, or a virtual object of a neutral camp.
In summary, in the scheme shown in this embodiment of the present application, the virtual scene is divided in advance into grid cells, and each grid cell is further divided into at least two base units. When determining the field-of-view range in the virtual scene, the grid cell where the exploration source is located is determined first, the base unit where the exploration source is located is then determined based on that grid cell, the visual base units of the exploration source are determined based on the base unit where the exploration source is located, and the scene picture of the virtual scene is generated based on the visual base units of the exploration source. In this way, the accuracy of the edge position of the field-of-view area is improved while the efficiency of the field-of-view calculation is preserved.
Taking a game scenario in which the scheme is applied to a MOBA game as an example, please refer to fig. 2 to 4, which show schematic diagrams of three field of view areas according to an embodiment of the present application.
Assume that the region where the virtual grass is located should lie outside the field of view of the exploration source, while the other regions outside the virtual grass should lie within it. In fig. 2, the computer device running the game directly uses the grid cell as the smallest unit of the field-of-view calculation; that is, in fig. 2, the invisible area (such as the filled area 21 in fig. 2) and the visible area (the rest of fig. 2) are each composed of a number of complete square grid cells. Because the square grid cells are large and the virtual grass is irregularly placed, part of the invisible area (e.g., the area of grid cell 22 in fig. 2) may actually contain a large portion that is not virtual grass, and correspondingly, part of the visible area (e.g., the area of grid cell 23 in fig. 2) may actually contain a portion of the virtual grass, resulting in poor accuracy of the edge position of the field of view. If the size of the grid cells is reduced to improve the precision, the grid cells would need to be reduced to a very small granularity to achieve sufficient precision; the complexity of locating the exploration source would then be much higher, and the computing and storage capacity of the computer device could not support it.
In fig. 3 and fig. 4, by contrast, the computer device divides the virtual scene in two stages. The first stage divides the virtual scene into a number of square grid cells; the second stage divides each grid cell further into 4 triangular base units. On the one hand, when the computer device locates the exploration source, it first locates the grid cell where the exploration source is located, and then locates the base unit where the exploration source is located among the 4 base units contained in that grid cell; compared with directly locating the base unit among a large number of base units, this greatly reduces the complexity of locating the exploration source. On the other hand, when the computer device performs the field-of-view calculation, the visible and invisible areas are distinguished at the granularity of the base unit, so the precision of the field-of-view edge is higher: in fig. 3 and fig. 4, the smallest unit of the invisible area (such as the filled area 31 in fig. 3 and the filled area 41 in fig. 4) and of the visible area is only one quarter of a grid cell, which greatly improves the accuracy of the field-of-view edge. For example, in contrast to fig. 2, in grid cell 32 in fig. 3 only the two triangular base units corresponding to the virtual grass are assigned to the invisible area, and correspondingly, in grid cell 33 in fig. 3 only the two triangular base units not corresponding to the virtual grass are assigned to the visible area. The situation in fig. 4 is similar to that of fig. 3 and will not be described again here.
Referring to fig. 5, a flowchart of a method for generating a picture of a virtual scene according to an exemplary embodiment of the present application is shown. The method may be performed by a computer device in which an application program for generating and displaying a virtual scene is running, for example, the computer device may be a terminal running a virtual scene client, or the computer device may also be a background server corresponding to the virtual scene client running in the terminal. As shown in fig. 5, the method may include the steps of:
step 501, acquiring position information of an exploration source in a virtual scene; the search source is a virtual object with a corresponding visual field distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each of the grid cells is divided into at least two base cells.
In the embodiment of the present application, the grid unit may be a square, rectangular, diamond or other area unit with larger size; the base unit may be an arbitrary-shaped area unit divided from the mesh unit as long as at least two base units can constitute one mesh unit.
The shapes of the respective base units may be the same, or the shapes of the respective base units may be different.
The dimensions of the individual base units may be the same or the dimensions of the individual base units may be different.
In one possible implementation, at least one edge of the base unit is non-parallel to each edge of the grid unit.
A virtual scene may contain many obstacles (such as virtual grass) that block the view of the exploration source. The shape of these obstacles is generally irregular, while the shape of the grid cells is regular, so the edges of the grid cells match the edges of some obstacles poorly. To improve how well the edges of the base units match the obstacles, and thereby improve the display effect at the edge of the field of view, in this embodiment of the application at least one edge of each base unit divided within a grid cell may be non-parallel to every edge of the grid cell, so that edges extending in different directions in the base units improve the match between the base units and the edges of the obstacles.
For example, please refer to fig. 6 to 8, which illustrate three kinds of grid cell division diagrams according to an embodiment of the present application.
As shown in fig. 6, the mesh unit may be square, and the base unit may be two triangular meshes divided according to one diagonal 61 of the square.
As shown in fig. 7, the base unit may be 4 triangular meshes divided by two diagonal lines (diagonal line 71 and diagonal line 72) of a square.
The base units can also be at least two polygonal grids divided by lines between the center point of the square and points on each side of the square; for example, as shown in fig. 8, the base units are 8 triangular grids divided by the lines between the center point 81 of the square and, respectively, the midpoint of each side and the four vertices.
The embodiment of the present application is only illustrated in the division manner of the grid cells shown in fig. 6 to 8, but the division manner of the grid cells is not limited.
Step 502, positioning a target grid cell from a plurality of grid cells based on the position information of the exploration source; the target grid cell is the grid cell where the exploration source is located.
Taking a square or rectangular virtual scene as an example, the grid cells are square area units arranged in M rows and N columns in the virtual scene. When the computer device locates the target grid cell among the grid cells, a plane rectangular coordinate system is established with a vertex of the virtual scene as the coordinate origin and the two sides meeting at that vertex as the coordinate axes; the coordinates of the exploration source are its coordinates in this plane rectangular coordinate system, and the column number and row number of the target grid cell can be obtained by dividing the abscissa and ordinate of the exploration source by the side length of the grid cells, respectively.
For example, assume the side length of the grid cells is 10 and the coordinates of the exploration source in the virtual scene are (1255, 863). Dividing the abscissa 1255 of the exploration source by 10 gives 125 with a remainder of 5, so the target grid cell is in column 126; correspondingly, dividing the ordinate 863 of the exploration source by 10 gives 86 with a remainder of 3, so the target grid cell is in row 87. That is, the grid cell in row 87, column 126 of the virtual scene is the target grid cell where the exploration source is located. Based on the row and column numbers of the target grid cell, the computer device may further determine the index or coordinates of the target grid cell.
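As an illustrative sketch of this lookup (Python is used here only for illustration; the scene origin is assumed to be (0, 0), and the names CELL_SIZE and locate_grid_cell are not from the patent):

    CELL_SIZE = 10  # side length of one grid cell, as in the example above

    def locate_grid_cell(x, y):
        """Return the 1-based (column, row) of the grid cell containing scene point (x, y)."""
        column = int(x // CELL_SIZE) + 1
        row = int(y // CELL_SIZE) + 1
        return column, row

    # Reproduces the worked example: the source at (1255, 863) is in column 126, row 87.
    assert locate_grid_cell(1255, 863) == (126, 87)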
Step 503, positioning a target basic unit from at least two basic units contained in the target grid unit based on the position information of the target grid unit and the position information of the exploration source; the target base unit is the base unit where the exploration source is located.
In the embodiment of the application, one grid cell may be represented by an index or coordinates of the grid cell in the virtual scene. Wherein, based on the index of the grid cell, the position information (such as coordinates) of the grid cell in the virtual scene can be queried or calculated.
The coordinates of one grid cell may be the coordinates of a specific point (such as the lower left vertex of a square grid cell) of the grid cell, and the coordinates may be combined with size information (such as the side length of a square) of the grid cell, so as to determine the position of the grid cell in the virtual scene.
In this embodiment of the present application, after determining the target grid cell, the computer device may determine at least two base cells included in the target grid cell according to the location information of the target grid cell, for example, determine respective indexes or coordinates of at least two base cells included in the target grid cell.
Determining the at least two base units contained in the target grid cell according to the position information of the target grid cell can be realized by a coordinate conversion method (or coordinate conversion formula) between the target grid cell and the at least two base units it contains.
Coordinate conversion is the basis of the subsequent calculations; therefore, the conversion formula from base-unit coordinates to grid-cell coordinates (which may also be referred to as scene coordinates), and the conversion formula from grid-cell coordinates to base-unit index/coordinates, need to be defined.
When dividing the scene into large grid cells, the developer can set the following known information: the coordinate origin of the grid cells in the virtual scene $(x_0, y_0)$, the number of grid cells in the two dimensions $(N_x, N_y)$, and the side length of each large grid cell $L$.
Referring to fig. 9, a schematic diagram of grid cell coordinate relationships according to an embodiment of the present application is shown. As shown in fig. 9, the mesh unit 91 is divided into base units of 4 triangles by two diagonal lines, and the derived mesh unit and base unit have the following relationship.
As shown, from the starting coordinates $(x_g, y_g)$ of the grid cell in the virtual scene (its lower-left corner), the barycentric coordinates of the four triangular base units $A$, $B$, $C$, $D$ in the virtual scene can be obtained. Taking $A$ (here taken to be the triangle adjacent to the left side) as an example:

$x_A = x_g + L/6$

$y_A = y_g + L/2$

Similarly, the coordinates of $B$, $C$, $D$ in the virtual scene can be obtained.
From the above, if the index $(i, j, k)$ of a triangular base unit is known, where $k \in \{0, 1, 2, 3\}$ identifies the triangle within its grid cell, the index $(i, j)$ of the corresponding grid cell can be located. The index $(i, j)$ indicates that the grid cell is in the $i$-th column and the $j$-th row, and the corresponding scene coordinates $(x_g, y_g)$ of the grid cell are:

$x_g = x_0 + i \cdot L$

$y_g = y_0 + j \cdot L$

where $(x_0, y_0)$ are the origin coordinates. Then, according to the known conditions above, the barycentric coordinates of the triangular base units in the virtual scene can be calculated, realizing the conversion from index to scene coordinates.
Conversely, the conversion from scene coordinates to the index of a triangular base unit can be done by first converting the scene coordinates $(x, y)$ into the index $(i, j)$ of the corresponding grid cell:

$i = \lfloor (x - x_0) / L \rfloor$

$j = \lfloor (y - y_0) / L \rfloor$

and then determining, within that grid cell, which triangular base unit the point falls in, as described below.
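A minimal sketch of these two conversions, under the assumptions above (origin (x0, y0), square cells of side L, and triangles within a cell numbered k = 0..3 as left, bottom, right, top; all names and the k numbering are illustrative):

    import math

    X0, Y0, L = 0.0, 0.0, 10.0  # assumed grid origin and cell side length

    def grid_to_scene(i, j):
        """Scene coordinates of the lower-left corner of the cell in column i, row j."""
        return X0 + i * L, Y0 + j * L

    def scene_to_grid(x, y):
        """Grid-cell index (i, j) containing the scene point (x, y)."""
        return math.floor((x - X0) / L), math.floor((y - Y0) / L)

    def triangle_barycenter(i, j, k):
        """Barycenter of triangle k (0=left, 1=bottom, 2=right, 3=top) of cell (i, j)."""
        xg, yg = grid_to_scene(i, j)
        dx, dy = {0: (L / 6, L / 2), 1: (L / 2, L / 6),
                  2: (5 * L / 6, L / 2), 3: (L / 2, 5 * L / 6)}[k]
        return xg + dx, yg + dy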
In one possible implementation, in response to the grid cell being a square grid, the base cell being a triangular grid divided by two diagonals of the grid cell, locating a target base cell from at least two base cells contained in the target grid cell based on the location information of the target grid cell and the location information of the exploration source, comprising:
acquiring distances from the exploration source to four sides of the target grid unit based on the position information of the target grid unit and the position information of the exploration source;
the target base unit is located from at least two base units contained in the target grid unit based on distances of the exploration source to four sides of the target grid unit.
Referring to fig. 10, a schematic diagram of exploration source positions according to an embodiment of the present application is shown. As shown in fig. 10, when the grid cell is divided into 4 triangular base units by its two diagonals and the exploration source 1001 lies in different base units of the grid cell, the distribution of the distances (Δa, Δb, Δc, Δd) between the exploration source 1001 and the four sides of the grid cell also differs; therefore, from the distances between the exploration source 1001 and the four sides of the grid cell, the base unit among the four base units in which the source lies can be determined.
In one possible implementation, locating the target base unit from the at least two base units contained in the target grid cell based on the distances from the exploration source to the four sides of the target grid cell includes:
comparing the distances from the exploration source to the four sides of the target grid unit in pairs to obtain the size relation between the distances from the exploration source to the four sides of the target grid unit;
based on the size relationship, the target base unit is located from at least two base units contained in the target grid unit.
As shown in fig. 10, after obtaining the coordinates of the grid cell in the virtual scene, the computer device may obtain the two-dimensional coordinates of the four sides of the grid cell from the side length of the grid cell. From the characteristics of the base-unit division shown in fig. 10, it can be deduced that the target base unit where the exploration source is located is the base unit adjacent to the side with the shortest of the distances from the coordinate point to the four sides of the grid cell. Thus, by comparing the magnitudes of Δa, Δb, Δc and Δd, it can be determined that Δa is the minimum, so the exploration source is located in the left base unit of the grid cell shown in fig. 10.
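A sketch of this shortest-side test, assuming a cell with lower-left corner (xg, yg) and side length side, split by its two diagonals into triangles labelled left, bottom, right and top (the labels are illustrative):

    def locate_triangle_by_sides(x, y, xg, yg, side):
        """Return the label of the triangle containing (x, y): the one attached
        to the nearest of the cell's four sides."""
        distances = {
            "left": x - xg,
            "right": (xg + side) - x,
            "bottom": y - yg,
            "top": (yg + side) - y,
        }
        return min(distances, key=distances.get)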
In one possible implementation, in response to the grid cell being a square grid, the base cell being a polygonal grid divided by lines between a center point of the square and each side of the square, locating a target base cell from at least two base cells contained in the target grid cell based on the location information of the target grid cell and the location information of the exploration source, comprising:
acquiring the center point coordinates of the target grid unit based on the position information of the target grid unit;
acquiring a connecting line included angle based on the central point coordinates of the target grid unit and the position information of the exploration source; the connecting line included angle is an included angle between a connecting line between the central point of the target grid unit and the exploration source and a reference line;
and positioning a target basic unit from at least two basic units contained in the target grid unit based on the connecting line included angle.
In this embodiment, when the grid cell is a square grid and the base units are polygonal grids divided by lines between the center point of the square and points on each side of the square, the boundaries of each polygonal grid are such lines. That is, given a reference line, the angular sectors that the polygonal grids subtend at the center point of the square together cover 360° and do not overlap. In other words, taking the center point of the square as the origin, for any point in the square, if the angle between the line from the center point to that point and the reference line is known, the polygonal grid in which the point lies can be determined. Based on this principle, the computer device can determine the target base unit from the angle between the reference line and the line connecting the exploration source to the center point of the square.
Referring to fig. 11, an angular division diagram according to an embodiment of the present application is shown. As shown in fig. 11, taking the reference line to be the x-axis, the base units are 8 triangular grids divided by the lines between the center point of the square and, respectively, the midpoints of its sides and its four vertices; the angle range corresponding to triangular grid 1101 is 0° to 45°, that corresponding to triangular grid 1102 is 45° to 90°, and so on. If the computer device calculates that the angle between the abscissa axis and the line connecting exploration source 1103 to the origin is 30°, it can determine that exploration source 1103 is located in triangular grid 1101.
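A sketch of the angle test for this eight-triangle split, assuming the reference line is the positive x-axis through the cell's center and the sectors are numbered 0 to 7 counter-clockwise from it (the numbering is illustrative):

    import math

    def locate_sector(x, y, cx, cy):
        """Return the index of the 45-degree sector (triangular base unit)
        containing (x, y), measured about the cell center (cx, cy)."""
        angle = math.degrees(math.atan2(y - cy, x - cx)) % 360.0
        return int(angle // 45)

    # As in the example: a 30-degree angle falls in sector 0 (grid 1101 above).
    assert locate_sector(math.cos(math.radians(30)), math.sin(math.radians(30)), 0.0, 0.0) == 0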
In one possible implementation, in response to the grid cell being a square grid, the base cell being a triangular grid divided by lines between a center point of the square and each side of the square, locating a target base cell from at least two base cells contained in the target grid cell based on the location information of the target grid cell and the location information of the exploration source, comprising:
acquiring coordinates of respective center points of at least two basic units contained in the target grid unit based on the position information of the target grid unit;
Acquiring distances from the central points of at least two basic units contained in the target grid unit to the exploration source respectively based on the coordinates of the central points of the at least two basic units contained in the target grid unit and the position information of the exploration source;
a target base unit is located from the at least two base units contained in the target grid unit based on distances between respective center points of the at least two base units contained in the target grid unit and the exploration source.
For example, taking fig. 9 as an example, after the grid cell 91 is divided into 4 triangular base units by its two diagonals, the computer device may calculate the coordinates of the center points A, B, C, D of the 4 base units in the virtual scene, and, based on those coordinates and the coordinates of the exploration source in the virtual scene, calculate the distance from the exploration source to each of A, B, C, D; the base unit whose center point has the minimum distance is the target base unit where the exploration source is located.
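A sketch of this nearest-center test for the four-triangle split of fig. 9, with the barycenters written out directly (corner (xg, yg), side length side; the labels are illustrative):

    def locate_triangle_by_center(x, y, xg, yg, side):
        """Return the label of the triangle whose barycenter is closest to (x, y)."""
        centers = {
            "left":   (xg + side / 6,     yg + side / 2),
            "bottom": (xg + side / 2,     yg + side / 6),
            "right":  (xg + 5 * side / 6, yg + side / 2),
            "top":    (xg + side / 2,     yg + 5 * side / 6),
        }
        return min(centers,
                   key=lambda k: (x - centers[k][0]) ** 2 + (y - centers[k][1]) ** 2)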
After determining the target base unit, the computer device may determine each visual base unit corresponding to the search source from base units around the target base unit based on the field of view distance and the location information of the target base unit.
Step 504, obtaining the visual range of the search source based on the visual field distance and the position information of the target base unit.
In one possible implementation, the obtaining the visual range of the search source based on the field of view distance and the location information of the target base unit includes:
acquiring position information of each obstacle in the virtual scene;
based on the visual field distance, the position information of the target base unit, and the position information of each obstacle, a visual range of the search source is acquired.
Since there are obstacles in the virtual scene that may block the visible area of the exploration source, the exploration source may not be able to explore the entire area within its field-of-view distance. In this embodiment of the application, therefore, after determining the target base unit, the computer device may determine the visual range of the exploration source based on the field-of-view distance of the exploration source, the position information of the target base unit, and the position information of the obstacles.
The visual range of the exploration source may be the range within the virtual scene whose distance to the exploration source is not greater than the field-of-view distance and between which and the exploration source no obstacle exists.
Step 505, traversing each base unit in the visual range, and determining the visual base unit.
In one possible implementation, after determining the visual range of the exploration source, the computer device may traverse through the base units in the virtual scene to determine the base units therein that are within the visual range.
When the computer device determines whether a base unit is within the visual range of the exploration source, it may obtain the coordinates of the center point of the base unit and determine whether those coordinates lie within the visual range of the exploration source; if the coordinates of the center point lie within the visual range, the base unit is determined to be a visual base unit within the visual range; otherwise, the base unit is not considered a visual base unit within the visual range.
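A sketch of steps 504 and 505 combined, assuming each base unit is represented by its center point and occlusion is reduced to a caller-supplied predicate blocked(source, point) (all names are illustrative):

    import math

    def visible_units(source, view_distance, unit_centers, blocked):
        """Return the indices of base units whose center lies within the view
        distance of the source and is not occluded by an obstacle.

        unit_centers: iterable of (index, (cx, cy)) pairs.
        blocked: callable taking (source, point) -> bool.
        """
        sx, sy = source
        visible = []
        for index, (cx, cy) in unit_centers:
            in_range = math.hypot(cx - sx, cy - sy) <= view_distance
            if in_range and not blocked(source, (cx, cy)):
                visible.append(index)
        return visible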
Step 506 sets a state assignment for the visual base unit, the state assignment decreasing over time.
In this embodiment, when a base unit is determined to be a visual base unit, the computer device may set for it a state assignment that decreases over time, indicating that the base unit is in a visible state for the camp where the exploration source is located; correspondingly, when the state assignment decreases to 0, the base unit exits the visible state for the camp where the exploration source is located.
In one possible implementation, setting state assignments for visual base units includes:
setting the state assignment of the visual basic unit to an initial value in response to the current state assignment of the visual basic unit being 0;
and resetting the state assignment of the visual basic unit to the initial value in response to the current state assignment of the visual basic unit not being 0.
In this embodiment of the present application, the computer device may periodically execute the above steps 501 to 506. In each execution, when the current state assignment of a visual base unit determined this time is 0, the visual base unit was in an invisible state before the current moment, and the state assignment is directly set to the initial value; when the current state assignment of the visual base unit determined this time is not 0, the visual base unit was already in a visible state before the current moment, and resetting the state assignment refreshes the duration of its visible state.
Step 507, generating a scene picture of the virtual scene based on the visual base units; in the scene picture, the specified virtual object in the visual base units is in a visible state, and the specified virtual object is a virtual object that does not belong to the same camp as the exploration source.
In an embodiment of the present application, in response to the state assignment of the visual base unit not decrementing to 0, a scene picture of the virtual scene is generated based on the visual base unit.
That is, when the state assignment of the visual basic unit is not decremented to 0, the specified virtual object in the visual basic unit is in the visual state in the scene picture of the generated virtual scene.
Correspondingly, if the state assignment of a subsequent visual basic unit is decremented to 0, the visual basic unit exits the visual state and is converted into a non-visual basic unit, and in a scene picture of the virtual scene generated subsequently, the designated virtual object in the non-visual basic unit is in a non-visual state.
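A sketch of this lighting bookkeeping: each visual base unit gets a countdown that is set (or refreshed) to an initial value and decays on every tick, and a unit stays visible while its value is above 0. The initial value of 30 ticks is an assumption for illustration:

    LIGHT_DURATION = 30  # assumed initial state assignment, in ticks

    state = {}  # base-unit index -> remaining lit ticks (absent or 0 = dark)

    def light(unit):
        """Set or refresh the unit's state assignment to the initial value."""
        state[unit] = LIGHT_DURATION

    def tick():
        """Decrement every lit unit by one and return the set still visible."""
        for unit in list(state):
            if state[unit] > 0:
                state[unit] -= 1
        return {unit for unit, value in state.items() if value > 0}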
In the solution shown in the embodiment of the present application, generating, based on the visual basic unit, a scene picture of the virtual scene includes:
generating a global view map; the global view map is used for indicating whether each basic unit in the virtual scene is in a visual state or not; and the visual basic unit is in a visual state in the global view map;
based on the global view map, a scene picture of the virtual scene is generated.
In this embodiment, the computer device may generate the scene picture from the global perspective of the virtual scene. One camp generally has a plurality of exploration sources; when generating the scene picture, the computer device merges the visual base units corresponding to the plurality of exploration sources to obtain the global visual base units, generates a global view map based on them to indicate which base units in the virtual scene are lit up, and then generates the scene picture based on the global view map.
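A sketch of this merge, assuming the per-source visible sets have already been computed and base units are identified by integer indices (names are illustrative):

    def build_global_view_map(visible_sets, total_units):
        """Merge the visible sets of all of a camp's exploration sources into
        one visibility flag per base unit."""
        merged = set().union(*visible_sets)
        return [unit in merged for unit in range(total_units)]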
Through the above steps of this embodiment of the application, the computer device can provide a high-precision display of the field-of-view edge. Please refer to figs. 12 to 14, which show schematic diagrams of scene pictures according to embodiments of the present application. As shown in figs. 12 to 14, the inside of the virtual grass in the virtual scene lies outside the field-of-view area of camps other than that of virtual object 1201. As shown in fig. 12, when virtual object 1201 is outside the edge of the virtual grass, it is visible to the users of the other camps; as shown in figs. 13 and 14, when virtual object 1201 moves inside the edge of the virtual grass, it becomes invisible to the users of the other camps.
Step 508, determining grid cells corresponding to the foggy effect in the virtual scene based on the global view map.
In the embodiment of the application, the misting effect can be rendered for the invisible area of the user in the virtual scene so as to improve the display authenticity of the invisible area. Generally, the edge position accuracy of the foggy effect has a small influence on the display effect of the scene picture of the virtual scene, and in order to improve the rendering efficiency of the foggy effect, in the embodiment of the present application, the computer device may perform the rendering of the foggy effect in units of grid units based on the global view map.
In one possible implementation, the determining, based on the global view map, the grid cells corresponding to the fog effect in the virtual scene includes:
acquiring, based on the global view map, state information of the at least two basic units in a first grid unit, wherein the state information is used for indicating whether the corresponding basic unit is in the visual state; the first grid cell is any grid cell in the virtual scene;
in response to the number of base cells in the first grid cell that are not in the visible state reaching a number threshold, determining the first grid cell to be a grid cell corresponding to the fog effect.
In the embodiment of the application, when the number of base units not in the visual state among the base units of one grid unit reaches the number threshold, the computer device can determine that the grid unit is a grid unit corresponding to the fog effect.
For example, taking the number threshold as 1, when a grid cell includes one or more base cells that are not in the visible state, the computer device may determine that the grid cell is a grid cell corresponding to the fog effect.
The number threshold may be set by a developer according to how the base units are divided, or may be set according to a configuration operation of a user, for example a user configuration operation for picture quality.
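A minimal sketch of this test follows, assuming a view map indexed per base unit and a lookup base_units_of from a grid cell to the indices of its base units (both hypothetical helpers of this sketch):

```python
# Hedged sketch of the per-grid-cell fog test: the cell is rendered with the
# fog effect once the count of its non-visible base units reaches a threshold.
# base_units_of is a hypothetical lookup from grid cell to base-unit indices.

def is_fog_cell(grid_cell, view_map, base_units_of, number_threshold=1):
    hidden = sum(1 for i in base_units_of(grid_cell) if not view_map[i])
    return hidden >= number_threshold
```

With the default number_threshold of 1, this matches the example above: any grid cell containing at least one non-visible base cell is treated as a fog cell.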
Step 509, rendering the fog effect in the scene picture of the virtual scene based on the grid cells corresponding to the fog effect in the virtual scene.
The computer device may render the fog effect at the grid cells corresponding to the fog effect in the scene picture of the virtual scene, and then display the result in the display interface of the virtual scene.
In the embodiment of the application, locating the target base unit within the target grid unit is in effect an up-sampling of the grid unit (improving accuracy), while determining the fog grid cells based on the global view map is in effect a down-sampling of the base units (reducing accuracy). That is, the computer device can flexibly combine up-sampled base units (such as triangles) and down-sampled grid units (such as squares) to balance accuracy against computation speed in the visual field and fog calculations.
Referring to fig. 15, a schematic diagram of a visual field and fog calculation process according to an embodiment of the present application is shown. As shown in fig. 15, taking as an example a square grid cell divided into a plurality of triangular base cells, the process may include the following steps (a condensed code sketch follows the list):
S1501: first, the characters present in the scene are traversed to select the exploration sources, such as heroes, soldiers, and bullets with an exploration function.
S1502: the scene coordinates of each exploration source are converted into a base triangle index, and all triangles within the maximum visual field range are then acquired according to the configuration.
S1503: the maximum visual field range is intersected with the obstacle mask pre-calculated for the corresponding base triangle, finally yielding the visible base triangles (namely, the visible range) of the exploration source.
S1504: the visible base triangles are traversed for lighting assignment, the assigned value being equal to the lighting duration.
S1505: on each pass of the main loop, the value of each lit triangle is attenuated; when the value reaches 0, the triangle returns to the unlit state.
S1506: a global view map is obtained. The map may use a down-sampling scheme to generate an original map for rendering the fog in units of grid cells, trading accuracy for computation speed. The resulting original fog-rendering map is shown in fig. 16, where the white region 1601 is the non-fog rendering region and the remaining black regions are fog rendering regions.
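The sketch below condenses S1501 to S1506 into one update function, reusing the VisionCell sketch given earlier. Every helper name here (to_triangle_index, triangles_in_range, obstacle_mask, downsample_to_grid) is a hypothetical placeholder, and masks are modeled as sets of triangle indices; none of these are APIs from the embodiment.

```python
# Condensed, assumption-laden sketch of the S1501-S1506 loop. All helper names
# are hypothetical; masks are modeled as sets of triangle indices.

def update_fog_of_war(sources, cells, helpers):
    # S1501-S1503: for each exploration source, light every triangle it can see.
    for src in sources:
        origin = helpers.to_triangle_index(src.position)       # scene coords -> triangle index
        in_range = helpers.triangles_in_range(origin, src.view_distance)
        visible = in_range & helpers.obstacle_mask(origin)     # intersect with precomputed mask
        for tri in visible:
            cells[tri].light()                                 # S1504: assign the lighting duration
    # S1505: attenuate every lit triangle once per main loop pass.
    for cell in cells:
        cell.tick()
    # S1506: downsample triangle-level visibility to grid cells for fog rendering.
    return helpers.downsample_to_grid([cell.visible for cell in cells])
```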
In summary, in the scheme shown in the embodiment of the present application, the virtual scene is divided in advance into grid units, and each grid unit is further divided into at least two base units. When determining the visual field range in the virtual scene, the computer device first determines the grid unit where the exploration source is located, then locates, within that grid unit, the base unit where the exploration source is located, determines the visual base units of the exploration source based on that base unit, and generates the scene picture of the virtual scene based on those visual base units.
Fig. 17 is a block diagram of a picture generation apparatus of a virtual scene according to an exemplary embodiment of the present application, which may be used to perform all or part of the steps of the method shown in fig. 1 or fig. 5 described above. As shown in fig. 17, the apparatus includes:
a position information acquiring module 1701, configured to acquire position information of an exploration source in a virtual scene; the exploration source is a virtual object with a corresponding visual field distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each grid cell is divided into at least two base cells; the base cell has at least one side that is not parallel to any side of the grid cell;
a grid cell positioning module 1702, configured to position a target grid cell from the plurality of grid cells based on the position information of the exploration source; the target grid cell is the grid cell where the exploration source is located (a hedged coordinate-mapping sketch is given below);
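As a hedged illustration of this module, the sketch below maps scene coordinates to a grid cell index, assuming square cells of side cell_size laid out row by row from the scene origin; the layout and all names are assumptions of this sketch, since the embodiment does not fix them.

```python
# Illustrative coordinate-to-grid-cell mapping under an assumed layout: square
# cells of side cell_size, arranged row by row from the scene origin.

def locate_grid_cell(x, y, cell_size, columns):
    col = int(x // cell_size)
    row = int(y // cell_size)
    return row * columns + col  # index of the target grid cell
```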
a target base unit positioning module 1703, configured to position a target base unit from the at least two base units contained in the target grid unit based on the position information of the target grid unit and the position information of the exploration source; the target base unit is the base unit where the exploration source is located;
a base unit determining module 1704, configured to determine, from base units around the target base unit, each visual base unit corresponding to the exploration source based on the visual field distance and the position information of the target base unit;
a picture generation module 1705, configured to generate a scene picture of the virtual scene based on the visual base units; in the scene picture, the specified virtual object in the visual base units is in a visual state, and the specified virtual object is a virtual object that does not belong to the same camp as the exploration source.
In one possible implementation, when the grid cell is a square grid and the base cell is a triangular grid obtained by dividing the grid cell along its two diagonals,
the target base unit positioning module 1703 is configured to:
acquiring distances from the exploration source to the four sides of the target grid unit based on the position information of the target grid unit and the position information of the exploration source;
and locating the target base unit from the at least two base units contained in the target grid unit based on the distances from the exploration source to the four sides of the target grid unit.
In one possible implementation, when locating the target base unit from the at least two base units contained in the target grid unit based on the distances from the exploration source to the four sides of the target grid unit, the target base unit positioning module 1703 is configured to:
comparing the distances from the exploration source to the four sides of the target grid unit in pairs to obtain the magnitude relationship among those distances;
and locating the target base unit from the at least two base units contained in the target grid unit based on the magnitude relationship.
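For the diagonal split, the pairwise comparison reduces to picking the nearest side: the containing triangle is the one adjacent to the side that the exploration source is closest to. The sketch below illustrates this under the assumption of coordinates local to the grid cell; the direction labels are illustrative.

```python
# Sketch for the square-plus-diagonals split: the point lies in the triangle
# adjacent to its nearest side. (px, py) are coordinates local to the grid
# cell; all names are illustrative.

def locate_triangle_by_sides(px, py, cell_size):
    distances = {
        "left": px,
        "right": cell_size - px,
        "bottom": py,
        "top": cell_size - py,
    }
    # The pairwise magnitude comparison boils down to taking the minimum.
    return min(distances, key=distances.get)
```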
In one possible implementation, when the grid cell is a square grid and the base cell is a triangular grid obtained by dividing the square along the lines connecting its center point to its sides, the target base unit positioning module 1703 is configured to:
acquiring the coordinates of the center point of the target grid unit based on the position information of the target grid unit;
acquiring a connecting-line included angle based on the coordinates of the center point of the target grid unit and the position information of the exploration source; the connecting-line included angle is the angle between a reference line and the line connecting the center point of the target grid unit to the exploration source;
and locating the target base unit from the at least two base units contained in the target grid unit based on the connecting-line included angle.
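A minimal sketch of the angle test follows, assuming the reference line is the positive x-axis through the cell center and that four triangles fan around the center; both are assumptions of this sketch, since the embodiment does not fix the reference line.

```python
import math

# Hedged sketch of the included-angle test. The reference line is assumed to be
# the positive x-axis through the cell center, with four triangles fanned
# around it (right, top, left, bottom).

def locate_triangle_by_angle(cx, cy, sx, sy):
    angle = math.atan2(sy - cy, sx - cx) % (2 * math.pi)
    # Rotate by 45 degrees so each 90-degree sector maps to one triangle.
    shifted = (angle + math.pi / 4) % (2 * math.pi)
    return int(shifted // (math.pi / 2))  # 0=right, 1=top, 2=left, 3=bottom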
In one possible implementation, when the grid cell is a square grid and the base cell is a triangular grid obtained by dividing the square along the lines connecting its center point to its sides, the target base unit positioning module 1703 is configured to:
acquiring the coordinates of the respective center points of the at least two base units contained in the target grid unit based on the position information of the target grid unit;
acquiring the distances between those center points and the exploration source based on the coordinates of the center points and the position information of the exploration source;
and locating the target base unit from the at least two base units contained in the target grid unit based on the distances between the respective center points and the exploration source.
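A sketch of this center-point variant follows, assuming centers is a precomputed list of (x, y) centroids of the base units in the target grid unit; the precomputation and names are assumptions of this sketch.

```python
# Hedged sketch of the center-point test: choose the base unit whose center is
# nearest the exploration source. centers is an assumed list of (x, y)
# centroids, one per base unit of the target grid unit.

def locate_triangle_by_center(sx, sy, centers):
    def squared_distance(c):
        return (c[0] - sx) ** 2 + (c[1] - sy) ** 2
    return min(range(len(centers)), key=lambda i: squared_distance(centers[i]))
```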
In one possible implementation, the base unit determining module 1704 is configured to:
Acquiring a visual range of the exploration source based on the visual field distance and the position information of the target basic unit;
traversing each basic unit in the visual range, and determining the visual basic unit.
In one possible implementation, when obtaining the visual range of the exploration source based on the visual field distance and the position information of the target base unit, the base unit determining module 1704 is configured to:
acquiring position information of each obstacle in the virtual scene;
and acquiring the visual range of the exploration source based on the visual field distance, the position information of the target base unit and the position information of each obstacle.
In one possible implementation, the apparatus further includes:
an assignment module, configured to set a state assignment for the visual basic unit before the picture generation module 1705 generates a scene picture of the virtual scene based on the visual basic unit, where the state assignment decreases with time;
the picture generation module 1705 is configured to generate a scene picture of the virtual scene based on the visual base unit in response to the state assignment of the visual base unit not decrementing to 0.
In one possible implementation, the assignment module is configured to,
setting the state assignment of the visual basic unit to an initial value in response to the current state assignment of the visual basic unit being 0;
and resetting the state assignment of the visual basic unit to the initial value in response to the current state assignment of the visual basic unit not being 0.
In one possible implementation, the frame generation module 1705 is configured to,
generating a global view map; the global view map is used for indicating whether each basic unit in the virtual scene is in a visual state or not; and the visual basic unit is in a visual state in the global view map;
and generating a scene picture of the virtual scene based on the global view map.
In one possible implementation, the apparatus further includes:
a grid cell determining module, configured to determine, based on the global view map, the grid cells corresponding to the fog effect in the virtual scene;
and a fog rendering module, configured to render the fog effect in the scene picture of the virtual scene based on the grid cells corresponding to the fog effect in the virtual scene.
In one possible implementation, the grid cell determining module is configured to:
Acquiring state information of at least two basic units in a first grid unit based on the global view map, wherein the state information is used for indicating whether the corresponding basic units are in a visual state or not; the first grid cell is any grid cell in the virtual scene;
in response to the number of base cells in the first grid cell that are not in the visible state reaching a number threshold, determining the first grid cell to be a grid cell corresponding to the fog effect.
In summary, in the scheme shown in the embodiment of the present application, the virtual scene is divided in advance into grid units, and each grid unit is further divided into at least two base units. When determining the visual field range in the virtual scene, the computer device first determines the grid unit where the exploration source is located, then locates, within that grid unit, the base unit where the exploration source is located, determines the visual base units of the exploration source based on that base unit, and generates the scene picture of the virtual scene based on those visual base units.
Fig. 18 shows a block diagram of a computer device 1800 provided by an exemplary embodiment of the present application. The computer device 1800 may be a terminal, such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer.
In general, the computer device 1800 includes: a processor 1801 and a memory 1802.
Processor 1801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1801 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array).
The memory 1802 may include one or more computer-readable storage media, which may be non-transitory. The memory 1802 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1802 is used to store at least one computer program/instruction for execution by processor 1801 to implement the methods provided by the method embodiments herein.
In some embodiments, the computer device 1800 may also optionally include: a peripheral interface 1803 and at least one peripheral. The processor 1801, memory 1802, and peripheral interface 1803 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 1803 by buses, signal lines or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1804, a display screen 1805, a camera assembly 1806, an audio circuit 1807, a positioning assembly 1808, and a power supply 1809.
The peripheral interface 1803 may be used to connect I/O (Input/Output) related at least one peripheral device to the processor 1801 and memory 1802.
The Radio Frequency circuit 1804 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1804 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1804 converts electrical signals to electromagnetic signals for transmission, or converts received electromagnetic signals to electrical signals.
The display 1805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 1805 is a touch display, the display 1805 also has the ability to collect touch signals at or above the surface of the display 1805. The touch signal may be input as a control signal to the processor 1801 for processing. At this point, the display 1805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard.
The camera assembly 1806 is used to capture images or video. Optionally, the camera assembly 1806 includes a front camera and a rear camera.
The audio circuitry 1807 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1801 for processing, or inputting the electric signals to the radio frequency circuit 1804 for realizing voice communication. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. In some embodiments, the audio circuitry 1807 may also include a headphone jack.
A power supply 1809 is used to power the various components in the computer device 1800. The power supply 1809 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
In some embodiments, the computer device 1800 also includes one or more sensors 1810. The one or more sensors 1810 include, but are not limited to: acceleration sensor 1811, gyroscope sensor 1812, pressure sensor 1813, fingerprint sensor 1814, optical sensor 1815, and proximity sensor 1816.
Those skilled in the art will appreciate that the architecture shown in fig. 18 is not limiting and that more or fewer components than shown may be included or that certain components may be combined or that a different arrangement of components may be employed.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium. The computer-readable storage medium may be the one included in the memory of the above embodiments, or may be a stand-alone computer-readable storage medium not incorporated into the terminal. The computer-readable storage medium stores at least one computer program that is loaded and executed by a processor to implement the methods described in the various embodiments of the present application.
Alternatively, the computer-readable storage medium may include: a read-only memory (ROM), a random access memory (RAM), a solid state drive (SSD), an optical disc, and the like. The random access memory may include a resistive random access memory (ReRAM) and a dynamic random access memory (DRAM). The foregoing embodiment numbers of the present application are for description only and do not imply any ranking of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods described in the above embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the presently disclosed aspects. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise constructions and arrangements shown in the drawings and described above, and that various modifications and changes may be made without departing from its scope. The scope of the application is limited only by the appended claims.

Claims (15)

1. A picture generation method of a virtual scene, the method comprising:
acquiring position information of an exploration source in a virtual scene; the exploration source is a virtual object with a corresponding visual field distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each grid cell is divided into at least two base cells; the base cell has at least one side that is not parallel to any side of the grid cell;
locating a target grid cell from a plurality of the grid cells based on the location information of the exploration source; the target grid cell is the grid cell where the exploration source is located;
locating a target base unit from at least two base units contained in the target grid unit based on the position information of the target grid unit and the position information of the exploration source; the target base unit is the base unit where the exploration source is located;
determining each visual basic unit corresponding to the exploration source from basic units around the target basic unit based on the visual field distance and the position information of the target basic unit;
generating a scene picture of the virtual scene based on the visual basic unit; in the scene picture, the specified virtual object in the visual basic unit is in a visual state, and the specified virtual object is a virtual object that does not belong to the same camp as the exploration source.
2. The method of claim 1, wherein when the grid cell is a square grid, the base cell is a triangular grid obtained by dividing the grid cell along its two diagonals, and
the locating a target base unit from at least two base units contained in the target grid unit based on the position information of the target grid unit and the position information of the exploration source comprises:
acquiring distances from the exploration source to the four sides of the target grid unit based on the position information of the target grid unit and the position information of the exploration source;
and locating the target base unit from the at least two base units contained in the target grid unit based on the distances from the exploration source to the four sides of the target grid unit.
3. The method of claim 2, wherein the locating the target base unit from the at least two base units contained in the target grid unit based on the distances from the exploration source to the four sides of the target grid unit comprises:
comparing the distances from the exploration source to the four sides of the target grid unit in pairs to obtain the magnitude relationship among those distances;
and locating the target base unit from the at least two base units contained in the target grid unit based on the magnitude relationship.
4. The method of claim 1, wherein when the grid cell is a square grid, the base cell is a triangular grid obtained by dividing the square along the lines connecting its center point to its sides, and
the locating a target base unit from at least two base units contained in the target grid unit based on the position information of the target grid unit and the position information of the exploration source comprises:
acquiring the coordinates of the center point of the target grid unit based on the position information of the target grid unit;
acquiring a connecting-line included angle based on the coordinates of the center point of the target grid unit and the position information of the exploration source; the connecting-line included angle is the angle between a reference line and the line connecting the center point of the target grid unit to the exploration source;
and locating the target base unit from the at least two base units contained in the target grid unit based on the connecting-line included angle.
5. The method of claim 1, wherein when the grid cell is a square grid, the base cell is a triangular grid obtained by dividing the square along the lines connecting its center point to its sides, and
the locating a target base unit from at least two base units contained in the target grid unit based on the position information of the target grid unit and the position information of the exploration source comprises:
acquiring the coordinates of the respective center points of the at least two base units contained in the target grid unit based on the position information of the target grid unit;
acquiring the distances between the respective center points of the at least two base units contained in the target grid unit and the exploration source based on the coordinates of those center points and the position information of the exploration source;
and locating the target base unit from the at least two base units contained in the target grid unit based on the distances between the respective center points and the exploration source.
6. The method of claim 1, wherein the determining each visual base unit corresponding to the exploration source from base units around the target base unit based on the visual field distance and the position information of the target base unit comprises:
acquiring a visual range of the exploration source based on the visual field distance and the position information of the target basic unit;
traversing each basic unit in the visual range, and determining the visual basic unit.
7. The method of claim 6, wherein the obtaining the visual range of the exploration source based on the field of view distance and the location information of the target base unit comprises:
acquiring position information of each obstacle in the virtual scene;
and acquiring the visual range of the exploration source based on the visual field distance, the position information of the target base unit and the position information of each obstacle.
8. The method of claim 1, wherein before the generating a scene picture of the virtual scene based on the visual base unit, the method further comprises:
setting state assignments for the visual basic units, wherein the state assignments are decreased along with time;
the generating a scene picture of the virtual scene based on the visual basic unit includes:
and generating a scene picture of the virtual scene based on the visual basic unit in response to the state assignment of the visual basic unit not decrementing to 0.
9. The method of claim 8, wherein said setting state assignments for said visual base unit comprises:
setting the state assignment of the visual basic unit to an initial value in response to the current state assignment of the visual basic unit being 0;
and resetting the state assignment of the visual basic unit to the initial value in response to the current state assignment of the visual basic unit not being 0.
10. The method of claim 1, wherein the generating a scene picture of the virtual scene based on the visual base unit comprises:
generating a global view map; the global view map is used for indicating whether each basic unit in the virtual scene is in a visual state or not; and the visual basic unit is in a visual state in the global view map;
and generating a scene picture of the virtual scene based on the global view map.
11. The method according to claim 10, wherein the method further comprises:
determining grid cells corresponding to the fog effect in the virtual scene based on the global view map;
and rendering the fog effect in a scene picture of the virtual scene based on the grid cells corresponding to the fog effect in the virtual scene.
12. The method of claim 11, wherein the determining grid cells corresponding to the fog effect in the virtual scene based on the global view map comprises:
acquiring state information of at least two basic units in a first grid unit based on the global view map, wherein the state information is used for indicating whether the corresponding basic unit is in a visual state; the first grid cell is any grid cell in the virtual scene;
and in response to the number of base cells in the first grid cell that are not in the visible state reaching a number threshold, determining the first grid cell to be a grid cell corresponding to the fog effect.
13. A picture generation apparatus of a virtual scene, the apparatus comprising:
a position information acquisition module, configured to acquire position information of an exploration source in a virtual scene; the exploration source is a virtual object with a corresponding visual field distance in the virtual scene; the virtual scene is divided into a plurality of grid cells, and each grid cell is divided into at least two base cells; the base cell has at least one side that is not parallel to any side of the grid cell;
a grid cell positioning module for positioning a target grid cell from among the plurality of grid cells based on the position information of the exploration source; the target grid cell is the grid cell where the exploration source is located;
a target base unit positioning module, configured to position a target base unit from at least two base units included in the target grid unit based on the position information of the target grid unit and the position information of the exploration source; the target base unit is the base unit where the exploration source is located;
a base unit determining module, configured to determine, from base units around the target base unit, each visual base unit corresponding to the exploration source based on the visual field distance and the position information of the target base unit;
a picture generation module, configured to generate a scene picture of the virtual scene based on the visual basic unit; in the scene picture, the specified virtual object in the visual basic unit is in a visual state, and the specified virtual object is a virtual object that does not belong to the same camp as the exploration source.
14. A computer device comprising a processor and a memory, wherein the memory stores at least one computer program, the at least one computer program being loaded and executed by the processor to implement the method of picture generation of a virtual scene as claimed in any one of claims 1 to 12.
15. A computer readable storage medium, wherein at least one computer program is stored in the readable storage medium, the at least one computer program being loaded and executed by a processor to implement the method of generating pictures of a virtual scene as claimed in any one of claims 1 to 12.
CN202110750124.1A 2021-07-02 2021-07-02 Picture generation method and device of virtual scene, computer equipment and storage medium Active CN113426131B (en)

GR01 Patent grant