CN113289334A - Game scene display method and device - Google Patents

Info

Publication number: CN113289334A
Application number: CN202110528116.2A
Authority: CN (China)
Prior art keywords: virtual, terrain, target, block, model
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 谭清宇
Current Assignee: Guangzhou Boguan Information Technology Co Ltd
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by: Netease Hangzhou Network Co Ltd

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/822 Strategy games; Role-playing games
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/65 Methods for processing data by generating or executing the game program for computing the condition of a game character
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/807 Role playing or strategy games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and a device for displaying a game scene. The method comprises: acquiring a virtual three-dimensional terrain model in a game scene, wherein the virtual three-dimensional terrain model is composed of a plurality of virtual terrain blocks; determining position information and visual field information of a virtual character in the game scene; when the minimum distance between the position information and the edges of the virtual three-dimensional terrain model is smaller than a preset distance, selecting a target terrain block to be displayed from the plurality of virtual terrain blocks based on the visual field information; and displaying the target terrain block in a preset area, wherein the preset area is the area corresponding to the virtual three-dimensional terrain model or an area outside that area. The invention solves the technical problem of the poor display effect produced when a game scene is displayed in a loop in the prior art.

Description

Game scene display method and device
Technical Field
The invention relates to the field of computers, in particular to a method and a device for displaying a game scene.
Background
Currently, because the size of a game scene is limited, a game player has a jarring experience when a virtual character moves to the boundary of the game scene. To compensate for this, the game scene needs to be displayed in a seamless loop so that it appears endless. In the prior art, seamless looping display of a game scene is mainly realized in the following two ways:
The first way: instantaneous movement of the virtual character or the camera. In this way, when the virtual character reaches the boundary of the game scene, the virtual character, camera, or scene is moved instantaneously so that the scene appears to have no boundary.
The second way: building the game scene as a spherical scene. In this way, the scene is modeled as a sphere, such as a planet, so that it has no boundary.
However, in the first way, the instantaneous movement of the virtual character or the camera makes the environment around the virtual character change abruptly, which easily draws the attention of the game player and degrades the game experience. Compensating for these abrupt changes requires additional work from game developers.
In the second way, a spherical scene is difficult to produce, and mature game-engine terrain systems and tools (e.g., Landscape and Terrain) cannot be used normally with it. Moreover, this way places requirements on the scale of the game scene: if the scene is small, the curvature of the sphere is large, which feels strange to the game player and may even affect the design of the gameplay; if the scene is large, the production cost of the scene increases, as do the demands on gameplay design.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a method and a device for displaying a game scene, so as to at least solve the technical problem of the poor display effect produced when a game scene is displayed in a loop in the prior art.
According to an aspect of the embodiments of the present invention, there is provided a method for displaying a game scene, including: acquiring a virtual three-dimensional terrain model in a game scene, wherein the virtual three-dimensional terrain model is composed of a plurality of virtual terrain blocks; determining position information and visual field information of a virtual character in the game scene; when the minimum distance between the position information and the edges of the virtual three-dimensional terrain model is smaller than a preset distance, selecting a target terrain block to be displayed from the plurality of virtual terrain blocks based on the visual field information; and displaying the target terrain block in a preset area, wherein the preset area is the area corresponding to the virtual three-dimensional terrain model or an area outside that area.
Further, the method for displaying the game scene further comprises the following steps: after a virtual three-dimensional terrain model in a game scene is obtained, obtaining a model position of the virtual model contained in the virtual three-dimensional terrain model; the virtual three-dimensional terrain model is divided into a plurality of virtual terrain blocks according to the model positions.
Further, the method for displaying the game scene further comprises the following steps: before a target terrain block to be displayed is selected from the plurality of virtual terrain blocks based on the visual field information, a first bounding box corresponding to each virtual model in the game scene is obtained; and a union operation is performed on the first bounding boxes corresponding to all the virtual models to obtain a second bounding box corresponding to the game scene.
Further, the method for displaying the game scene further comprises the following steps: after a target terrain block to be displayed is selected from the plurality of virtual terrain blocks based on the visual field information, acquiring offset positions obtained by mapping the target terrain block to a plurality of preset areas, wherein the plurality of preset areas comprise the area corresponding to the virtual three-dimensional terrain model and other areas which are located outside that area and adjoin the boundary of the virtual three-dimensional terrain model; calculating the distance between the position information of the virtual character and each offset position; determining a target offset position corresponding to the minimum distance; and determining the preset area where the target offset position is located as the position to be displayed corresponding to the target terrain block.
Further, the method for displaying the game scene further comprises the following steps: acquiring a first offset position corresponding to each preset area in a plurality of preset areas and area information corresponding to each preset area, wherein the area information at least comprises length information and width information corresponding to each preset area; acquiring first position information of a target terrain block in a virtual three-dimensional terrain model; calculating the product of the first offset position and the area information to obtain a first result; and calculating the sum of the first result and the first position information to obtain the offset position mapped to each preset area by the target terrain block.
Further, the method for displaying the game scene further comprises the following steps: acquiring a visual field range corresponding to the visual field information; and determining a virtual terrain block which does not intersect with the visual field range from the plurality of virtual terrain blocks contained in the first bounding box, to obtain a target terrain block.
Further, the method for displaying the game scene further comprises the following steps: comparing the visual field range with the bounding box range of the second bounding box to obtain a comparison result; determining a setting mode for setting the target terrain block on the position to be displayed according to the comparison result; and displaying the target terrain block on the position to be displayed.
Further, the method for displaying the game scene further comprises the following steps: and under the condition that the visual field range is smaller than or equal to the range of the bounding box, moving the target terrain block to the position to be displayed.
Further, the method for displaying the game scene further comprises the following steps: and copying the target terrain block to the position to be displayed under the condition that the visual field range is larger than the range of the bounding box.
Further, the method for displaying the game scene further comprises the following steps: after copying the target terrain block to a position to be displayed, acquiring a third bounding box consisting of the virtual three-dimensional terrain model and the copied target terrain block; determining a virtual terrain block outside the field of view from the third bounding box; and carrying out destruction operation on the virtual terrain block out of the visual field range.
According to another aspect of the embodiments of the present invention, there is also provided a game scene display apparatus, including: an acquisition module, configured to acquire a virtual three-dimensional terrain model in a game scene, wherein the virtual three-dimensional terrain model is composed of a plurality of virtual terrain blocks; a first determining module, configured to determine position information and visual field information of a virtual character in the game scene; a second determining module, configured to select a target terrain block to be displayed from the plurality of virtual terrain blocks based on the visual field information when the minimum distance between the position information and the edges of the virtual three-dimensional terrain model is smaller than a preset distance; and a display module, configured to display the target terrain block in a preset area, wherein the preset area is the area corresponding to the virtual three-dimensional terrain model or an area outside that area.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium, in which a computer program is stored, where the computer program is configured to execute the above-mentioned method for presenting a game scene when running.
According to another aspect of the embodiments of the present invention, there is also provided a processor for executing a program, wherein the program is configured to execute the above-mentioned method for presenting a game scene when running.
In the embodiment of the invention, a seamless looping scene effect is achieved from the position information and the visual field information of the virtual character. After a virtual three-dimensional terrain model composed of a plurality of virtual terrain blocks is acquired from the game scene, the position information and the visual field information of the virtual character in the game scene are determined; then, when the minimum distance between the position information and the edges of the virtual three-dimensional terrain model is smaller than a preset distance, a target terrain block to be displayed is selected from the plurality of virtual terrain blocks based on the visual field information, and the target terrain block is displayed in the area corresponding to the virtual three-dimensional terrain model or in an area outside that area.
In this process, the target terrain block to be displayed is determined according to the visual field information of the virtual character and displayed in an area outside the virtual three-dimensional terrain model, so that the game scene can be displayed in a seamless loop. Moreover, because the seamless connection of the game scene is achieved solely through the target terrain blocks, neither the game character nor the camera is moved instantaneously, which avoids the abrupt changes such movement would cause. In addition, the scheme reduces the requirements on the scale of the game scene and lowers the production cost of the game scene.
Therefore, the scheme provided by the application achieves seamless looping display of the game scene, thereby improving the game experience of game players and solving the technical problem of the poor display effect produced when a game scene is displayed in a loop in the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method for displaying a game scene according to an embodiment of the invention;
FIG. 2 is a schematic illustration of an alternative virtual three-dimensional terrain model according to embodiments of the present invention;
FIG. 3 is a schematic diagram of an optional plurality of virtual terrain blocks according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an alternative game scenario in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of an alternative game scenario in accordance with an embodiment of the present invention;
FIG. 6 is a schematic diagram of an alternative game scenario in accordance with an embodiment of the present invention;
FIG. 7 is a schematic diagram of a display device of a game scene according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, there is provided an embodiment of a method for presenting a game scenario, it should be noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system, such as a set of computer-executable instructions, and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
Further, it should be noted that a game terminal that runs a game may be the execution subject of the present embodiment. Optionally, the game terminal may be, but is not limited to, a smart phone, a tablet, a computer, etc.
Fig. 1 is a flowchart of a method for displaying a game scene according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
Step S102: a virtual three-dimensional terrain model in a game scene is acquired, wherein the virtual three-dimensional terrain model is composed of a plurality of virtual terrain blocks.
In step S102, the virtual three-dimensional terrain model may be a terrain model in a game scene, for example, the virtual three-dimensional terrain model shown in fig. 2. In addition, when dividing the virtual three-dimensional terrain model into blocks, the virtual three-dimensional terrain model may first be mapped to a two-dimensional terrain model, and the two-dimensional terrain model may then be divided into blocks to obtain the plurality of virtual terrain blocks. For example, the virtual three-dimensional terrain model in fig. 2 is diced to obtain the plurality of virtual terrain blocks shown in fig. 3, where each white rectangular box in fig. 3 represents one virtual terrain block.
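For illustration, the following is a minimal sketch, in Python, of how a terrain's two-dimensional footprint might be diced into uniform blocks in the manner just described. The names TerrainBlock and dice_terrain, the block size, and the stored metadata are assumptions made for this example rather than details taken from the patent.

    from dataclasses import dataclass

    @dataclass
    class TerrainBlock:
        index: tuple    # (column, row) of the block within the grid
        anchor: tuple   # geometric center of the block (x, y), usable as its anchor point
        bounds: tuple   # (min_x, min_y, max_x, max_y) axis-aligned rectangle on the 2D footprint

    def dice_terrain(terrain_min, terrain_max, block_size):
        """Map the 3D terrain onto its 2D footprint and split it into uniform square blocks."""
        min_x, min_y = terrain_min
        max_x, max_y = terrain_max
        cols = int((max_x - min_x) // block_size)
        rows = int((max_y - min_y) // block_size)
        blocks = []
        for i in range(cols):
            for j in range(rows):
                x0 = min_x + i * block_size
                y0 = min_y + j * block_size
                x1, y1 = x0 + block_size, y0 + block_size
                blocks.append(TerrainBlock(
                    index=(i, j),
                    anchor=((x0 + x1) / 2, (y0 + y1) / 2),
                    bounds=(x0, y0, x1, y1),
                ))
        return blocks

    # Example: a 4000 x 4000 terrain footprint diced into 500 x 500 blocks (an 8 x 8 grid).
    blocks = dice_terrain((0, 0), (4000, 4000), 500)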
Step S104: the position information and the visual field information of the virtual character in the game scene are determined.
In step S104, the visual field information represents the area of the game scene that is visible to the game player, and may be taken as the projection, on the horizontal plane, of the spherical visual field range corresponding to the game scene.
Step S106: when the minimum distance between the position information and the edges of the virtual three-dimensional terrain model is smaller than a preset distance, a target terrain block to be displayed is selected from the plurality of virtual terrain blocks based on the visual field information.
It should be noted that, in step S106, the virtual three-dimensional terrain model may include a plurality of boundaries. For example, in the schematic diagram shown in fig. 2, the virtual three-dimensional terrain model is a quadrilateral and has four edges. When the distance between the position of the virtual character and an edge of the virtual three-dimensional terrain model is smaller than the preset distance, the virtual character is very close to the boundary of the virtual three-dimensional terrain model, and if the virtual character continues to move toward that boundary, the area visible to the game player will extend beyond the virtual three-dimensional terrain model. At this point, the game terminal moves or copies the target terrain block so that, as the virtual character keeps moving toward the boundary, the area visible to the game player still shows the virtual three-dimensional terrain model, thereby achieving a seamless looping display of the game scene.
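As a concrete illustration of the trigger condition in step S106, the short Python sketch below computes the minimum distance from the character's position to the edges of a rectangular terrain footprint and compares it with the preset distance. The rectangle representation and the threshold value are assumptions made for this example.

    def distance_to_nearest_edge(position, terrain_bounds):
        """Minimum distance from the character's (x, y) position to the four edges of the
        rectangular terrain footprint given as (min_x, min_y, max_x, max_y)."""
        x, y = position
        min_x, min_y, max_x, max_y = terrain_bounds
        return min(x - min_x, max_x - x, y - min_y, max_y - y)

    def near_boundary(position, terrain_bounds, preset_distance):
        """True when the looping logic of step S106 should be triggered."""
        return distance_to_nearest_edge(position, terrain_bounds) < preset_distance

    # Example: a character at (3900, 2000) on a 4000 x 4000 terrain with a 200-unit threshold.
    assert near_boundary((3900, 2000), (0, 0, 4000, 4000), 200)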
In addition, the target terrain blocks are used to join the boundaries of the virtual three-dimensional terrain model so that the game scene can be displayed in a seamless loop. The target terrain block may also be determined from the orientation and/or movement direction of the virtual character; for example, the target terrain block may be a terrain block at the boundary of the virtual three-dimensional terrain model opposite to the orientation and/or movement direction of the virtual character.
In step S106, a position to be displayed of a target terrain block may be determined according to the position information of the virtual character, and a target terrain block among the plurality of virtual terrain blocks may be determined according to the visual field information and the position information of each virtual terrain block.
Step S108: the target terrain block is displayed in a preset area, wherein the preset area is the area corresponding to the virtual three-dimensional terrain model or an area outside that area.
Optionally, after the target terrain block and its position to be displayed are determined in step S106, the game terminal displays the target terrain block at the position to be displayed while the game is running, where the position to be displayed lies in the preset area. It should be noted that in this scheme neither the camera nor the virtual character is moved instantaneously; only the terrain blocks to be displayed among the plurality of virtual terrain blocks of the virtual three-dimensional terrain model are shown at their positions to be displayed, thereby realizing the seamless connection of the game scene.
Based on the solutions defined in steps S102 to S108, in the embodiment of the present invention the seamless looping scene effect is achieved from the position information and the visual field information of the virtual character: after a virtual three-dimensional terrain model composed of a plurality of virtual terrain blocks is acquired from the game scene, the position information and the visual field information of the virtual character in the game scene are determined; then, when the minimum distance between the position information and the edges of the virtual three-dimensional terrain model is smaller than the preset distance, a target terrain block to be displayed is selected from the plurality of virtual terrain blocks based on the visual field information, and the target terrain block is displayed in the area corresponding to the virtual three-dimensional terrain model or in an area outside that area.
It is easy to see that in the above process the target terrain block to be displayed is determined according to the visual field information of the virtual character and is displayed in an area outside the virtual three-dimensional terrain model, so that the game scene is displayed in a seamless loop. Moreover, because the seamless connection of the game scene is achieved solely through the target terrain blocks, neither the game character nor the camera is moved instantaneously, which avoids the abrupt changes such movement would cause. In addition, the scheme reduces the requirements on the scale of the game scene and lowers the production cost of the game scene.
Therefore, the scheme provided by the application achieves seamless looping display of the game scene, thereby improving the game experience of game players and solving the technical problem of the poor display effect produced when a game scene is displayed in a loop in the prior art.
In an optional embodiment, after acquiring the virtual three-dimensional terrain model in the game scene, the game terminal acquires a model position of the virtual model included in the virtual three-dimensional terrain model, and divides the virtual three-dimensional terrain model into a plurality of virtual terrain blocks according to the model position.
It should be noted that the virtual model in the virtual three-dimensional terrain model may be a static model, for example, a tree, a stone, a building, etc.
Optionally, the game terminal maps the virtual three-dimensional terrain model to a two-dimensional terrain model, projects a preset dicing range onto the two-dimensional terrain model, dices the two-dimensional terrain model according to that range using a dicing tool (for example, the World Composition tool), projects each virtual model onto the two-dimensional plane corresponding to the two-dimensional terrain model, determines the virtual terrain block to which the virtual model belongs, and assigns the virtual model to the corresponding virtual terrain block.
In the above process, the preset dicing range may be set according to the size of the two-dimensional terrain model. In addition, when dicing the two-dimensional terrain model, the shape and size of each virtual terrain block may be set according to the size of the two-dimensional terrain model, and the shapes and sizes of different virtual terrain blocks may be the same or different. Preferably, in the present application, all virtual terrain blocks have the same shape and size; for example, the plurality of virtual terrain blocks in fig. 3 are all identical in size and shape.
In addition, after the virtual three-dimensional terrain model is diced into a plurality of virtual terrain blocks, the game terminal may obtain the dicing meta-information of each virtual terrain block. The dicing meta-information of a virtual terrain block includes, but is not limited to, its position information, the size of its corresponding bounding box, and its corresponding scene file. The position information of a virtual terrain block may be the position of its anchor point, for example, the position of its geometric center.
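The following Python sketch illustrates how a static virtual model could be assigned to its virtual terrain block by projecting its position onto the two-dimensional footprint and indexing into the grid produced by the earlier dicing sketch; the grid origin and block size are assumptions made for this example.

    def block_index_for_model(model_pos, terrain_min, block_size):
        """Project a model's 3D position onto the 2D footprint and return the grid index
        of the virtual terrain block that contains it."""
        x, y, _z = model_pos
        i = int((x - terrain_min[0]) // block_size)
        j = int((y - terrain_min[1]) // block_size)
        return (i, j)

    # Example: a tree at (1230.0, 640.0, 87.5) on a 500-unit grid falls into block (2, 1).
    print(block_index_for_model((1230.0, 640.0, 87.5), (0, 0), 500))   # -> (2, 1)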
In an alternative embodiment, before selecting the target terrain block to be displayed from the plurality of virtual terrain blocks based on the visual field information, the game terminal first determines a bounding box corresponding to the game scene. Specifically, the game terminal obtains a first bounding box corresponding to each virtual model in the game scene, and performs union operation on the first bounding boxes corresponding to all the virtual models to obtain a second bounding box corresponding to the game scene.
It should be noted that a bounding box is a simple geometric volume slightly larger than the object it encloses. In practical applications, bounding boxes are generally used to approximate complex geometric objects; in the present application, they are used in place of the complex game scene and virtual models.
In addition, it should be noted that, since the game scene is composed of its virtual models, the game terminal may compute the union of the bounding boxes of all the virtual models in the entire game scene to obtain the bounding box corresponding to the entire game scene (i.e., the second bounding box).
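A minimal sketch of the union operation, assuming the bounding boxes are two-dimensional axis-aligned rectangles; the tuple layout is an assumption made for this example.

    def union_aabb(boxes):
        """Union of axis-aligned bounding boxes given as (min_x, min_y, max_x, max_y);
        the result is the smallest box containing all of them, i.e. the second bounding box."""
        min_x = min(b[0] for b in boxes)
        min_y = min(b[1] for b in boxes)
        max_x = max(b[2] for b in boxes)
        max_y = max(b[3] for b in boxes)
        return (min_x, min_y, max_x, max_y)

    # Example: three first bounding boxes (one per virtual model) merged into the scene box.
    first_boxes = [(0, 0, 100, 80), (90, 40, 250, 200), (-20, 10, 30, 60)]
    scene_box = union_aabb(first_boxes)   # -> (-20, 0, 250, 200)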
In an alternative embodiment, after selecting the target terrain block to be displayed from the plurality of virtual terrain blocks based on the visual field information, the game terminal may further determine the position at which the target terrain block is to be displayed. Specifically, the game terminal acquires the offset positions obtained by mapping the target terrain block into a plurality of preset areas, calculates the distance between the position information of the virtual character and each offset position, determines the target offset position corresponding to the minimum distance, and takes the preset area in which the target offset position lies as the position to be displayed for the target terrain block. The preset areas comprise the area corresponding to the virtual three-dimensional terrain model and the other areas that lie outside it and adjoin its boundary; that is, they comprise the area corresponding to the virtual three-dimensional terrain model where the virtual character is located and the eight areas adjoining its boundary.
Optionally, the game terminal first obtains the offset position obtained by mapping the target terrain block into each preset area. The local position of the target terrain block is the same within every preset area; for example, if the position coordinates of the target terrain block within the virtual three-dimensional terrain model are (x, y), then the corresponding local coordinates after the target terrain block is mapped into the area above the virtual three-dimensional terrain model are also (x, y). The game terminal can then calculate the offset position of the target terrain block in each preset area from these coordinates and the offset corresponding to that preset area.
Specifically, the game terminal acquires the first offset position corresponding to each of the plurality of preset areas and the area information corresponding to each preset area, acquires the first position information of the target terrain block within the virtual three-dimensional terrain model, calculates the product of the first offset position and the area information to obtain a first result, and calculates the sum of the first result and the first position information to obtain the offset position of the target terrain block mapped into each preset area. For example, if the offset position of the area where the virtual character is located (the virtual three-dimensional terrain model itself) is (0,0), then the first offset position corresponding to the preset area above the virtual three-dimensional terrain model is (0,1), the first offset position corresponding to the preset area on its right side is (1,0), and so on for the remaining preset areas. Next, the product of the first offset position of each preset area and the area information is calculated to obtain the first result, i.e., the length and width of each preset area are multiplied by the abscissa and ordinate of its first offset position respectively. For example, for the preset area above the virtual three-dimensional terrain model, with area information (a, b), the first result is (0, b); for the preset area on the right side of the virtual three-dimensional terrain model, with area information (a, b), the first result is (a, 0). Finally, the sum of the first result and the first position information gives the offset position of the target terrain block in each preset area: for the preset area above the virtual three-dimensional terrain model, the offset position of the target terrain block is (x, y + b); for the preset area on the right side, it is (x + a, y).
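The offset computation and the choice of the nearest preset area can be summarized in the following Python sketch. The nine preset offsets, the function names, and the example coordinates are assumptions made for illustration.

    import math

    # The nine preset areas: the terrain itself at offset (0, 0) and the eight areas adjoining its boundary.
    PRESET_OFFSETS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

    def mapped_offset_position(block_pos, region_size, preset_offset):
        """Offset position of a block mapped into one preset area:
        first_result = preset_offset * region_size, result = first_result + block_pos."""
        x, y = block_pos
        a, b = region_size
        dx, dy = preset_offset
        return (x + dx * a, y + dy * b)

    def position_to_display(block_pos, region_size, character_pos):
        """Among the nine mapped positions, pick the one closest to the virtual character."""
        candidates = [mapped_offset_position(block_pos, region_size, o) for o in PRESET_OFFSETS]
        return min(candidates, key=lambda p: math.dist(p, character_pos))

    # Example: a block anchored at (100, 300) in a 4000 x 4000 scene, with the character near
    # the right edge at (3950, 300); the copy mapped into the right-hand area wins.
    print(position_to_display((100, 300), (4000, 4000), (3950, 300)))   # -> (4100, 300)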
Further, after the offset positions are determined, the game terminal chooses, from the virtual three-dimensional terrain model and the eight areas surrounding it, the area in which to place the target terrain block so that the distance between the position to be displayed and the current position of the game character is the smallest; in other words, the preset area whose mapped offset position is closest to the game character is taken as the position to be displayed for the target terrain block.
In an alternative embodiment, after determining the position to be displayed for the target terrain block, the game terminal selects the target terrain block to be displayed from the plurality of virtual terrain blocks based on the visual field information. Specifically, the game terminal acquires the visual field range corresponding to the visual field information and determines, from the plurality of virtual terrain blocks contained in the first bounding box, a virtual terrain block which does not intersect with the visual field range, thereby obtaining the target terrain block.
Optionally, the game terminal computes the intersection between the visual field range and the bounding box (i.e., the first bounding box) corresponding to each virtual terrain block. If the current virtual terrain block intersects the visual field range, it is determined to be a visible terrain block; if it does not intersect the visual field range, it is determined to be an invisible terrain block. The game terminal then determines the target terrain block from among the invisible terrain blocks or from among the visible terrain blocks according to the position information of the virtual character.
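The visibility test can be sketched as follows in Python, treating the visual field range as a circle projected onto the horizontal plane and each first bounding box as an axis-aligned rectangle; both representations are assumptions made for this example.

    def circle_intersects_rect(center, radius, rect):
        """True if the circular visual field range overlaps an axis-aligned bounding box
        given as (min_x, min_y, max_x, max_y)."""
        cx, cy = center
        min_x, min_y, max_x, max_y = rect
        nx = min(max(cx, min_x), max_x)   # closest point of the rectangle to the circle center
        ny = min(max(cy, min_y), max_y)
        return (nx - cx) ** 2 + (ny - cy) ** 2 <= radius ** 2

    def classify_blocks(block_bounds, view_center, view_radius):
        """Split block bounding boxes into visible (intersecting the view) and invisible ones."""
        visible, invisible = [], []
        for rect in block_bounds:
            (visible if circle_intersects_rect(view_center, view_radius, rect) else invisible).append(rect)
        return visible, invisible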
In an alternative embodiment, when the scene range corresponding to the game scene is larger than the visual field range, the game terminal, after determining the position information of the virtual character, determines the target terrain block from among the invisible terrain blocks in combination with the orientation and/or movement direction of the virtual character. For example, in the game scene diagram shown in fig. 4, the virtual character is at position M and moves to the right; the game terminal may then determine the target terrain block from the virtual terrain blocks to the left of the virtual terrain block corresponding to position M. In fig. 4, the virtual terrain blocks A and B are determined as target terrain blocks.
In another alternative embodiment, when the scene range corresponding to the game scene is not larger than the visual field range, the game terminal, after determining the position information of the virtual character, determines the target terrain block from among the visible terrain blocks in combination with the orientation and/or movement direction of the virtual character. For example, in the game scene diagram shown in fig. 5, the virtual character is at position M and moves to the right; the game terminal may then determine the target terrain block from the virtual terrain blocks to the left of the virtual terrain block corresponding to position M.
In an alternative embodiment, after determining the target terrain block and its position to be displayed, the game terminal may display the target terrain block at the position to be displayed. Specifically, the game terminal compares the visual field range with the bounding box range of the second bounding box to obtain a comparison result, determines, according to the comparison result, the manner in which the target terrain block is placed at the position to be displayed, and then displays the target terrain block at the position to be displayed.
It should be noted that there are two placement manners, namely moving the target terrain block and copying the target terrain block, and different comparison results correspond to different placement manners.
Optionally, when the visual field range is smaller than or equal to the bounding box range, the game terminal moves the target terrain block to the position to be displayed. For example, in the game scene diagram shown in fig. 4, the visual field range is smaller than the bounding box range, the virtual terrain blocks A and B are target terrain blocks, and A' and B' are the positions to be displayed corresponding to A and B respectively. In this scene, the game terminal directly moves the virtual terrain blocks A and B to the positions to be displayed A' and B', displays or loads the target terrain blocks at those positions, and hides or unloads the virtual terrain blocks at the original positions of A and B.
It should be noted that, when the visual field range is smaller than or equal to the bounding box range, copying the target terrain block could cause the target terrain block to appear repeatedly; therefore, in this embodiment, the target terrain block is moved rather than copied in that case.
Optionally, when the visual field range is larger than the bounding box range, the target terrain block is copied to the position to be displayed. For example, in the game scene diagram shown in fig. 5, the target terrain block C has not been instantiated and the visual field range is larger than the bounding box range; the game terminal therefore instantiates the target terrain block C and copies it to the position to be displayed D.
In addition, after copying the target terrain block to the position to be displayed, the game terminal obtains a third bounding box formed by the virtual three-dimensional terrain model and the copied target terrain block, determines from the third bounding box the virtual terrain blocks that lie outside the visual field range, and destroys them; that is, instantiated virtual terrain blocks that are outside the visual field are destroyed.
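The move-or-copy decision and the destruction of copies that have left the view can be sketched as below, reusing the circle_intersects_rect helper from the earlier sketch. How exactly a circular visual field range is compared with the rectangular bounding box range is not specified here, so the diameter-versus-side-length comparison is only an assumption.

    def view_exceeds_scene(view_radius, scene_box):
        """True when the visual field range is larger than the scene's second bounding box,
        here taken to mean that the view diameter exceeds the box in either dimension."""
        min_x, min_y, max_x, max_y = scene_box
        return 2 * view_radius > (max_x - min_x) or 2 * view_radius > (max_y - min_y)

    def placement_mode(view_radius, scene_box):
        """'copy' when the view is larger than the bounding box range, otherwise 'move'
        (moving avoids the same block being shown twice)."""
        return "copy" if view_exceeds_scene(view_radius, scene_box) else "move"

    def prune_out_of_view(copied_block_bounds, view_center, view_radius):
        """Destroy (here: simply drop) instantiated copies whose bounds no longer intersect the view."""
        return [rect for rect in copied_block_bounds
                if circle_intersects_rect(view_center, view_radius, rect)]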
It should be noted that the above operations achieve a seamless looping display of the game scene. For example, in the game scene shown in fig. 6, with the scheme provided by the present application, when the virtual character moves to the boundary of the game scene, the game player perceives no discontinuity as the character crosses the boundary, and the problems caused by instantaneously moving the camera or the virtual character are avoided. Moreover, the scheme does not interfere with the production of the game scene, places few requirements on the scene's size, and can realize seamless looping display with existing game engines and tools, so development efficiency is high; it also avoids the problems of a spherical scene, makes the game scene more continuous, and improves the game experience of game players.
Example 2
According to an embodiment of the present invention, there is also provided an embodiment of a display apparatus for a game scene, where fig. 7 is a schematic diagram of the display apparatus for a game scene according to the embodiment of the present invention, and as shown in fig. 7, the apparatus includes: an acquisition module 701, a first determination module 703, a second determination module 705, and a display module 707.
The acquiring module 701 is configured to acquire a virtual three-dimensional terrain model in a game scene, where the virtual three-dimensional terrain model is composed of a plurality of virtual terrain blocks; a first determining module 703, configured to determine position information and view information of a virtual character in a game scene; a second determining module 705, configured to select a target terrain block to be displayed from the plurality of virtual terrain blocks based on the view information when a minimum distance between the position information and a plurality of edges of the virtual three-dimensional terrain model is smaller than a preset distance; and a display module 707, configured to display the target terrain block in a preset area, where the preset area is an area outside an area corresponding to the virtual three-dimensional terrain model or an area corresponding to the virtual three-dimensional terrain model.
It should be noted that the acquiring module 701, the first determining module 703, the second determining module 705, and the display module 707 correspond to steps S102 to S108 in the foregoing embodiment; the four modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the disclosure of the foregoing embodiment.
Optionally, the display device for the game scene further includes: the device comprises a first obtaining module and a dividing module. The first obtaining module is used for obtaining a model position of a virtual model contained in a virtual three-dimensional terrain model after obtaining the virtual three-dimensional terrain model in a game scene; and the dividing module is used for dividing the virtual three-dimensional terrain model into a plurality of virtual terrain blocks according to the model position.
Optionally, the display device for the game scene further includes: a second acquisition module and a first processing module. The second acquisition module is used for acquiring a first bounding box corresponding to each virtual model in the game scene before selecting a target terrain block to be displayed from the plurality of virtual terrain blocks based on the visual field information; and the first processing module is used for performing a union operation on the first bounding boxes corresponding to all the virtual models to obtain a second bounding box corresponding to the game scene.
Optionally, the display device for the game scene further includes: the device comprises a third acquisition module, a first calculation module, a third determination module and a fourth determination module. The third acquisition module is used for acquiring offset positions corresponding to a plurality of preset areas mapped by a target terrain block after the target terrain block to be displayed is selected from the plurality of virtual terrain blocks based on the visual field information, wherein the plurality of preset areas comprise areas corresponding to the virtual three-dimensional terrain models and other areas which are positioned outside the areas corresponding to the virtual three-dimensional terrain models and are connected with the boundaries of the virtual three-dimensional terrain models; the first calculation module is used for calculating the distance between the position information and the offset position of the virtual role; the third determining module is used for determining a target offset position corresponding to the minimum distance; and the fourth determining module is used for determining the preset area where the target offset position is located as the position to be displayed corresponding to the target terrain block.
Optionally, the third obtaining module includes: a sixth obtaining module, a seventh obtaining module, a second calculation module, and a third calculation module. The sixth obtaining module is used for obtaining a first offset position corresponding to each preset region in the plurality of preset regions and region information corresponding to each preset region, wherein the region information at least comprises length information and width information corresponding to each preset region; the seventh obtaining module is used for obtaining first position information of the target terrain block in the virtual three-dimensional terrain model; the second calculation module is used for calculating the product of the first offset position and the region information to obtain a first result; and the third calculation module is used for calculating the sum of the first result and the first position information to obtain the offset position mapped to each preset region by the target terrain block.
Optionally, the second determining module includes: a fourth obtaining module and a sixth determining module. The fourth obtaining module is used for obtaining a visual field range corresponding to the visual field information; and the sixth determining module is used for determining a virtual terrain block which does not intersect with the visual field range from the plurality of virtual terrain blocks contained in the first bounding box, to obtain a target terrain block.
Optionally, the display module includes: the device comprises a comparison module, a setting module and a first display module. The comparison module is used for comparing the visual field range with the bounding box range of the second bounding box to obtain a comparison result; the setting module is used for determining a setting mode for setting the target terrain block on the position to be displayed according to the comparison result; the first display module is used for displaying the target terrain block on the position to be displayed.
Optionally, the setting module includes: a second processing module and a third processing module. The second processing module is used for moving the target terrain block to the position to be displayed under the condition that the visual field range is smaller than or equal to the range of the bounding box; and the third processing module is used for copying the target terrain block to the position to be displayed under the condition that the visual field range is larger than the range of the bounding box.
Optionally, the display device for the game scene further includes: a fifth acquisition module, a seventh determination module and a fourth processing module. The fifth acquisition module is used for acquiring a third bounding box formed by the virtual three-dimensional terrain model and the copied target terrain block after the target terrain block is copied to the position to be displayed; the seventh determination module is used for determining, from the third bounding box, a virtual terrain block outside the visual field range; and the fourth processing module is used for destroying the virtual terrain block outside the visual field range.
Example 3
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium, in which a computer program is stored, wherein the computer program is configured to execute the method for presenting a game scene in embodiment 1 when running.
Example 4
According to another aspect of the embodiments of the present invention, there is also provided a processor for executing a program, wherein the program is configured to execute the method for presenting a game scene in embodiment 1 when running.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (13)

1. A method for displaying a game scene is characterized by comprising the following steps:
acquiring a virtual three-dimensional terrain model in a game scene, wherein the virtual three-dimensional terrain model is composed of a plurality of virtual terrain blocks;
determining position information and visual field information of a virtual character in the game scene;
when the minimum distance between the position information and the edges of the virtual three-dimensional terrain model is smaller than a preset distance, selecting a target terrain block to be displayed from the plurality of virtual terrain blocks based on the visual field information;
and displaying the target terrain block in a preset area, wherein the preset area is the area corresponding to the virtual three-dimensional terrain model or an area outside the area corresponding to the virtual three-dimensional terrain model.
2. The method of claim 1, wherein after acquiring the virtual three-dimensional terrain model in the game scene, the method further comprises:
obtaining a model position of a virtual model contained in the virtual three-dimensional terrain model;
dividing the virtual three-dimensional terrain model into the plurality of virtual terrain blocks according to the model positions.
3. The method of claim 1, wherein prior to selecting a target terrain block to be displayed from the plurality of virtual terrain blocks based on the visual field information, the method further comprises:
acquiring a first bounding box corresponding to each virtual terrain block in the game scene;
and performing a union operation on the first bounding boxes corresponding to all the virtual terrain blocks to obtain a second bounding box corresponding to the game scene.
4. The method of claim 3, wherein after selecting a target terrain block to be displayed from the plurality of virtual terrain blocks based on the visual field information, the method further comprises:
acquiring offset positions corresponding to the target terrain block mapped to a plurality of preset areas, wherein the plurality of preset areas comprise areas corresponding to the virtual three-dimensional terrain model and other areas which are positioned outside the areas corresponding to the virtual three-dimensional terrain model and connected with the boundary of the virtual three-dimensional terrain model;
calculating a distance between the position information of the virtual character and the offset position;
determining a target offset position corresponding to the minimum distance;
and determining a preset area where the target offset position is located as a position to be displayed corresponding to the target terrain block.
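The selection in claim 4 reduces to a nearest-candidate search over the block's mapped (offset) positions. A minimal sketch, assuming positions are 2D points on the terrain plane and that the names are illustrative:

import math

def pick_display_position(char_pos, offset_positions):
    # offset_positions: the target block's position mapped into each preset area (see claim 5).
    # The mapped position nearest to the character is where the block will be displayed.
    cx, cz = char_pos
    return min(offset_positions, key=lambda p: math.hypot(p[0] - cx, p[1] - cz))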
5. The method of claim 4, wherein acquiring the offset positions corresponding to the target terrain block being mapped to the plurality of preset areas comprises:
acquiring a first offset position corresponding to each preset area in a plurality of preset areas and area information corresponding to each preset area, wherein the area information at least comprises length information and width information corresponding to each preset area;
acquiring first position information of the target terrain block in the virtual three-dimensional terrain model;
calculating the product of the first offset position and the area information to obtain a first result;
and calculating the sum of the first result and the first position information to obtain an offset position corresponding to the target terrain block mapped to each preset area.
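Claim 5's arithmetic is: mapped position = first offset position × area information + first position. The sketch below assumes a 3×3 neighbourhood of preset areas (the terrain area itself at offset (0, 0) plus the eight bordering areas); that neighbourhood choice and the names are assumptions.

def mapped_positions(block_pos, area_length, area_width):
    bx, bz = block_pos  # first position of the target block inside the terrain model
    return [(ox * area_length + bx, oz * area_width + bz)
            for ox in (-1, 0, 1) for oz in (-1, 0, 1)]

For example, using pick_display_position from the sketch after claim 4, pick_display_position((5.0, 3.0), mapped_positions((90.0, 10.0), 100.0, 100.0)) returns (-10.0, 10.0): the copy shifted one area to the left is closer to the character than the block's original position (90.0, 10.0).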
6. The method of claim 4, wherein selecting a target terrain block to be displayed from the plurality of virtual terrain blocks based on the visual field information comprises:
acquiring a visual field range corresponding to the visual field information;
determining, from a plurality of virtual terrain blocks contained in the first bounding box, a virtual terrain block which does not intersect with the visual field range, to obtain the target terrain block.
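A sketch of the non-intersection test in claim 6, using axis-aligned rectangles as a stand-in for the actual block footprints and visual field range; the representation is assumed, not taken from the application.

def intersects(a, b):
    # a, b: rectangles as (min_x, min_z, max_x, max_z) on the terrain plane.
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def targets_outside_view(view_range, block_boxes):
    # Target terrain blocks are those whose bounding box does not intersect the view range.
    return [box for box in block_boxes if not intersects(view_range, box)]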
7. The method of claim 6, wherein displaying the target terrain block within a preset area comprises:
comparing the visual field range with the bounding box range of the second bounding box to obtain a comparison result;
determining a setting mode for setting the target terrain block on the position to be displayed according to the comparison result;
and displaying the target terrain block on the position to be displayed.
8. The method of claim 7, wherein determining a setting manner for setting the target terrain block on the position to be displayed according to the comparison result comprises:
and moving the target terrain block to the position to be displayed under the condition that the comparison result represents that the visual field range is smaller than or equal to the bounding box range.
9. The method of claim 7, wherein determining a setting manner for setting the target terrain block on the position to be displayed according to the comparison result comprises:
and under the condition that the comparison result represents that the visual field range is larger than the bounding box range, copying the target terrain block to the position to be displayed.
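Claims 7 to 9 decide between moving and copying the target block by comparing the visual field range with the second bounding box. A hedged sketch follows, collapsing both ranges to scalar extents purely for illustration:

def placement_mode(view_extent, scene_box_extent):
    # Claim 8: if the view fits within the scene bounding box, the block's original
    # location cannot be seen at the same time as its mapped location, so moving suffices.
    # Claim 9: otherwise both locations may be visible at once, so a copy is made instead.
    return "move" if view_extent <= scene_box_extent else "copy"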
10. The method of claim 9, wherein after copying the target terrain block onto the location to be displayed, the method further comprises:
acquiring a third bounding box consisting of the virtual three-dimensional terrain model and the copied target terrain block;
determining, from the third bounding box, a virtual terrain block that is outside the visual field range;
and performing a destruction operation on the virtual terrain block which is outside the visual field range.
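Claim 10's clean-up step can be sketched as collecting, from the combined (third) bounding box, the blocks that no longer intersect the visual field range. The callback-style intersects parameter is a placeholder, since the actual engine-level test is not specified in the application.

def blocks_to_destroy(all_blocks, view_range, intersects):
    # all_blocks: blocks (including copies) contained in the third bounding box.
    return [b for b in all_blocks if not intersects(b, view_range)]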
11. A display device for a game scene, comprising:
the game system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring a virtual three-dimensional terrain model in a game scene, and the virtual three-dimensional terrain model consists of a plurality of virtual terrain blocks;
the first determining module is used for determining the position information and the visual field information of the virtual character in the game scene;
a second determining module, configured to select a target terrain block to be displayed from the plurality of virtual terrain blocks based on the view information when a minimum distance between the position information and a plurality of edges of the virtual three-dimensional terrain model is smaller than a preset distance;
and a display module, configured to display the target terrain block in a preset area, wherein the preset area is the area corresponding to the virtual three-dimensional terrain model or an area outside the area corresponding to the virtual three-dimensional terrain model.
12. A non-volatile storage medium having a computer program stored therein, wherein the computer program, when run, is configured to execute the method for displaying a game scene according to any one of claims 1 to 10.
13. A processor configured to run a program, wherein the program, when run, executes the method for displaying a game scene according to any one of claims 1 to 10.
CN202110528116.2A 2021-05-14 2021-05-14 Game scene display method and device Pending CN113289334A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110528116.2A CN113289334A (en) 2021-05-14 2021-05-14 Game scene display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110528116.2A CN113289334A (en) 2021-05-14 2021-05-14 Game scene display method and device

Publications (1)

Publication Number Publication Date
CN113289334A true CN113289334A (en) 2021-08-24

Family

ID=77322330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110528116.2A Pending CN113289334A (en) 2021-05-14 2021-05-14 Game scene display method and device

Country Status (1)

Country Link
CN (1) CN113289334A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108273265A (en) * 2017-01-25 2018-07-13 网易(杭州)网络有限公司 The display methods and device of virtual objects
CN108771866A (en) * 2018-05-29 2018-11-09 网易(杭州)网络有限公司 Virtual object control method in virtual reality and device
CN109847354A (en) * 2018-12-19 2019-06-07 网易(杭州)网络有限公司 The method and device of virtual lens control in a kind of game
US20200316466A1 (en) * 2017-12-29 2020-10-08 Netease (Hangzhou) Network Co., Ltd. Information Processing Method and Apparatus, Mobile Terminal, and Storage Medium
CN111773699A (en) * 2020-07-20 2020-10-16 网易(杭州)网络有限公司 Deformation method and device for terrain
CN112076474A (en) * 2020-09-27 2020-12-15 网易(杭州)网络有限公司 Information display method and device
CN112190944A (en) * 2020-10-21 2021-01-08 网易(杭州)网络有限公司 Virtual building model construction method and device and electronic device
CN112530012A (en) * 2020-12-24 2021-03-19 网易(杭州)网络有限公司 Virtual earth surface processing method and device and electronic device
CN112604280A (en) * 2020-12-29 2021-04-06 珠海金山网络游戏科技有限公司 Game terrain generating method and device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108273265A (en) * 2017-01-25 2018-07-13 网易(杭州)网络有限公司 The display methods and device of virtual objects
US20200316466A1 (en) * 2017-12-29 2020-10-08 Netease (Hangzhou) Network Co., Ltd. Information Processing Method and Apparatus, Mobile Terminal, and Storage Medium
CN108771866A (en) * 2018-05-29 2018-11-09 网易(杭州)网络有限公司 Virtual object control method in virtual reality and device
CN109847354A (en) * 2018-12-19 2019-06-07 网易(杭州)网络有限公司 The method and device of virtual lens control in a kind of game
WO2020124839A1 (en) * 2018-12-19 2020-06-25 网易(杭州)网络有限公司 Method and device for controlling virtual lens in game
CN111773699A (en) * 2020-07-20 2020-10-16 网易(杭州)网络有限公司 Deformation method and device for terrain
CN112076474A (en) * 2020-09-27 2020-12-15 网易(杭州)网络有限公司 Information display method and device
CN112190944A (en) * 2020-10-21 2021-01-08 网易(杭州)网络有限公司 Virtual building model construction method and device and electronic device
CN112530012A (en) * 2020-12-24 2021-03-19 网易(杭州)网络有限公司 Virtual earth surface processing method and device and electronic device
CN112604280A (en) * 2020-12-29 2021-04-06 珠海金山网络游戏科技有限公司 Game terrain generating method and device

Similar Documents

Publication Publication Date Title
CN107358649B (en) Processing method and device of terrain file
US20170154468A1 (en) Method and electronic apparatus for constructing virtual reality scene model
EP3501012B1 (en) System and method for procedurally generated object distribution in regions of a three-dimensional virtual environment
CN113658316B (en) Rendering method and device of three-dimensional model, storage medium and computer equipment
CN107638690A (en) Method, device, server and medium for realizing augmented reality
Kolivand et al. Cultural heritage in marker-less augmented reality: A survey
US20220016526A1 (en) Method and Apparatus for Vertex Reconstruction based on Terrain Cutting, Processor and Terminal
CN109395387A (en) Display methods, device, storage medium and the electronic device of threedimensional model
CN108230430B (en) Cloud layer mask image processing method and device
CN115512025A (en) Method and device for detecting model rendering performance, electronic device and storage medium
TWI780995B (en) Image processing method, equipment, computer storage medium
US20130249914A1 (en) Method for Manipulating Three-Dimensional Voxel Data for On-Screen Visual
CN111494945A (en) Virtual object processing method and device, storage medium and electronic equipment
CN112190937A (en) Illumination processing method, device, equipment and storage medium in game
KR101680174B1 (en) Method for generation of coloring design using 3d model, recording medium and device for performing the method
CN113289334A (en) Game scene display method and device
CN116363324A (en) Two-dimensional and three-dimensional integrated rendering method for situation map
CN111462343B (en) Data processing method and device, electronic equipment and storage medium
CN109062416B (en) Map state conversion method and device
JP5616198B2 (en) Method and apparatus for generating appearance display image of same feature having different level of detail
CN111467800A (en) Fusion method and device of virtual three-dimensional model
CN113599818B (en) Vegetation rendering method and device, electronic equipment and readable storage medium
CN116617658B (en) Image rendering method and related device
CN117115805B (en) Random irregular object identification method and device under Unreal Engine platform
CN116977538A (en) Image rendering method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240201

Address after: No. 8011, No. 905, Ziyun District, Guangzhou, Guangdong Province

Applicant after: GUANGZHOU BOGUAN INFORMATION SCIENCE & TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: 310000 7 storeys, Building No. 599, Changhe Street Network Business Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: NETEASE (HANGZHOU) NETWORK Co.,Ltd.

Country or region before: China
