CN111773709B - Scene map generation method and device, computer storage medium and electronic equipment


Info

Publication number
CN111773709B
Authority
CN
China
Prior art keywords
map
scene
spliced
content data
base units
Prior art date
Legal status
Active
Application number
CN202010819236.3A
Other languages
Chinese (zh)
Other versions
CN111773709A (en)
Inventor
黄春昊
谢冰
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010819236.3A
Publication of CN111773709A
Application granted
Publication of CN111773709B
Legal status: Active


Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 - Controlling the output signals based on the game progress
    • A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to the technical field of games and provides a scene map generation method and apparatus, a computer storage medium, and an electronic device. The method includes the following steps: acquiring a map configuration table; loading two or more preset map base units according to a first scene area captured by the virtual camera in the scene map, and splicing the two or more map base units in the first scene area to obtain a first spliced map; acquiring, from the map configuration table, the map content data corresponding to each map base unit in the first spliced map according to the position of each map base unit in the scene map; and respectively loading the map content data corresponding to each map base unit in the first spliced map into the corresponding map base unit, so as to obtain a first target scene map for display. The method and the apparatus can reduce the loading time of the target scene map and save system overhead.

Description

Scene map generation method and device, computer storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of game technologies, and in particular, to a scene map generation method, a scene map generation apparatus, a computer-readable storage medium, and an electronic device.
Background
With the development of the game industry, the creation of game scene maps is receiving more and more attention from game developers seeking to provide players with a good gaming experience. While pursuing the completeness and clarity of the game map, developers also face a number of problems, such as excessive loading time and memory usage when the scene map is used, caused by the limited processing capacity of the device memory and CPU.
The prior art loads a large scene map in blocks: the map is divided into a plurality of blocks according to the different spatial regions of the game scene, and when the large scene map is loaded, only the scene blocks visible to the in-game virtual camera are loaded. However, this method requires art personnel to build the whole game scene in advance, which is inefficient, and when the game scene needs to be modified, the modification process is cumbersome.
In view of this, there is a need in the art to develop a new scene map generation method and apparatus.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure aims to provide a scene map generation method, a scene map generation apparatus, a computer-readable storage medium, and an electronic device, thereby improving, at least to some extent, the efficiency of scene map generation.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to an aspect of the present disclosure, there is provided a scene map generation method, the method including: acquiring a map configuration table, wherein the map configuration table includes a plurality of positions of a scene map and the map content data corresponding to each position; loading two or more preset map base units according to a first scene area captured by a virtual camera in the scene map, and splicing the two or more map base units in the first scene area to obtain a first spliced map; acquiring, from the map configuration table, the map content data corresponding to each map base unit in the first spliced map according to the position of each map base unit in the scene map; and respectively loading the map content data corresponding to each map base unit in the first spliced map into the corresponding map base unit, so as to obtain a first target scene map for display.
In some exemplary embodiments of the present disclosure, the loading of two or more preset map base units according to a first scene area captured by a virtual camera in the scene map includes: determining the target number of preset map base units to be loaded according to the size of the first scene area captured by the virtual camera in the scene map and the size of a preset map base unit; and loading the target number of map base units.
In some exemplary embodiments of the present disclosure, the splicing of the two or more map base units in the first scene area to obtain a first spliced map includes: starting from the position of the origin of the spatial coordinate system corresponding to the first scene area, sequentially splicing the loaded target number of map base units in the first scene area to obtain the first spliced map.
In some exemplary embodiments of the present disclosure, after the stitching the two or more map base units in the first scene area to obtain a first stitched map, the method further includes: and acquiring the position of each map base unit in the first spliced map in the scene map.
In some exemplary embodiments of the present disclosure, the preset map base unit is a map containing a preset number of patch models.
In some exemplary embodiments of the present disclosure, the loading of two or more preset map base units includes: loading two or more map base units from a cache pool, wherein the cache pool is used to store a preset number of map base units.
In some exemplary embodiments of the present disclosure, the map configuration table includes two or more sub-map configuration tables, and the map content data includes two or more kinds of map content data; the two or more sub map configuration tables include a plurality of positions of the scene map and map content data of a target category corresponding to each of the positions, respectively.
In some exemplary embodiments of the present disclosure, the method further includes: in response to a shot switching instruction for the virtual camera, controlling the virtual camera to switch shots and determining a second scene area captured in the game scene after the switch; loading two or more preset map base units according to the second scene area captured in the game scene after the virtual camera switches shots, and splicing the two or more map base units in the second scene area to obtain a second spliced map; acquiring, from the map configuration table, the map content data corresponding to each map base unit in the second spliced map according to the position of each map base unit in the scene map; and respectively loading the map content data corresponding to each map base unit in the second spliced map into the corresponding map base unit, so as to obtain a second target scene map for display.
In some exemplary embodiments of the present disclosure, after the obtaining the second target scene map for display, the method further includes: and displaying the second target scene map and deleting the first target scene map.
In some exemplary embodiments of the present disclosure, the deleting the first target scene map includes: deleting the map content data in the first target scene map, and moving the map base unit corresponding to the first target scene map into a cache pool for storage.
In some exemplary embodiments of the present disclosure, the map content data includes at least one of relief height data, relief texture data, river region data, and building model data.
According to one aspect of the present disclosure, there is provided a scene map generation apparatus, including: a configuration acquisition module, configured to acquire a map configuration table, wherein the map configuration table includes a plurality of positions of a scene map and the map content data corresponding to each position; a map stitching module, configured to load two or more preset map base units according to a first scene area captured by the virtual camera in the scene map, and splice the two or more map base units in the first scene area to obtain a first spliced map; a data acquisition module, configured to acquire, from the map configuration table, the map content data corresponding to each map base unit in the first spliced map according to the position of each map base unit in the scene map; and a map determination module, configured to respectively load the map content data corresponding to each map base unit in the first spliced map into the corresponding map base unit, so as to obtain a first target scene map for display.
According to an aspect of the present disclosure, there is provided a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements a method of generating a scene map as described in the above embodiments.
According to one aspect of the present disclosure, there is provided an electronic device including: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of generating a scene map as described in the above embodiments.
As can be seen from the above technical solutions, the scene map generating method and apparatus, the computer-readable storage medium, and the electronic device in the exemplary embodiments of the present disclosure have at least the following advantages and positive effects:
the scene map generation method acquires a map configuration table, wherein the map configuration table includes a plurality of positions of the scene map and the map content data corresponding to each position; loads two or more preset map base units according to a first scene area captured by the virtual camera in the scene map, and splices the two or more map base units in the first scene area to obtain a first spliced map; acquires, from the map configuration table, the map content data corresponding to each map base unit in the first spliced map according to the position of each map base unit in the scene map; and finally, respectively loads the map content data corresponding to each map base unit in the first spliced map into the corresponding map base unit, so as to obtain a first target scene map for display. With this method, on the one hand, the target scene map can be generated from the map configuration table and the map base units, which reduces the loading time of the scene map, reduces the memory it occupies, and saves system overhead; on the other hand, the target scene map is rendered from the map configuration table, so it changes with the map content data in the table, enabling flexible modification of the target scene map, reducing the workload of map makers, and improving generation efficiency; in still another aspect, dynamic switching of the target scene map can be achieved according to the capture range of the virtual camera.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 schematically illustrates a flow diagram of a method of generating a scene map according to an embodiment of the disclosure;
fig. 2 schematically illustrates a structural schematic of a preset map base unit according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a structural diagram of a first scene region captured by a virtual camera, according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow diagram of a method of generating a scene map in accordance with a specific embodiment of the disclosure;
FIG. 5 schematically illustrates a flow diagram of a dynamic replacement scenario map according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow diagram for updating a target scene map according to a particular embodiment of the present disclosure;
FIG. 7 schematically illustrates a block diagram of a scene map generation apparatus according to an embodiment of the disclosure;
FIG. 8 schematically illustrates a block diagram of an electronic device according to an embodiment of the disclosure;
fig. 9 schematically illustrates a program product schematic according to an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the disclosed aspects may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In the related art in this field, block loading of scene areas is mainly adopted: the map is divided into a plurality of block areas according to the different spatial areas in the game scene, and when the game is loaded, only the scene area visible to the in-game virtual camera is loaded. However, this technique has the disadvantage that the whole game scene must be prefabricated by art personnel, and the modification process of the game scene is complex.
Based on the problems in the related art, an embodiment of the present disclosure provides a scene map generation method. The method may be applied to the generation of a game scene or of a real-world map; the present disclosure does not specifically limit its application scenario, and variations of the application scenario should be understood as falling within the protection scope of the present application. The embodiments of the present disclosure are explained taking a game scene map as an example. Fig. 1 shows a flow diagram of the scene map generation method; as shown in fig. 1, the method includes at least the following steps:
Step S110: acquiring a map configuration table, wherein the map configuration table includes a plurality of positions of a scene map and the map content data corresponding to the respective positions;
Step S120: loading two or more preset map base units according to a first scene area captured by the virtual camera in the scene map, and splicing the two or more map base units in the first scene area to obtain a first spliced map;
Step S130: acquiring, from the map configuration table, the map content data corresponding to each map base unit in the first spliced map according to the position of each map base unit in the scene map;
Step S140: respectively loading the map content data corresponding to each map base unit in the first spliced map into the corresponding map base unit, so as to obtain a first target scene map for display.
With the scene map generation method described above, on the one hand, the target scene map can be generated from the map configuration table and the map base units, which reduces the loading time of the target scene map, reduces the memory it occupies, and saves system overhead; on the other hand, the target scene map is rendered from the map configuration table, so it changes with the map content data in the table, enabling flexible modification of the target scene map, reducing the workload of map makers, and improving generation efficiency; in still another aspect, dynamic switching of the target scene map can be achieved according to the capture range of the virtual camera.
The scene map generation method in the embodiments of the present disclosure may run on a terminal device or on a server. The terminal device may be a local terminal device. When the method runs on a server, the game may be implemented as a cloud game.
In an alternative embodiment, cloud gaming refers to a style of game based on cloud computing. In the running mode of a cloud game, the execution of the game program and the presentation of the game picture are separated: the storage and execution of the scene map generation method are completed on a cloud game server, while the cloud game client is responsible for receiving and sending data and presenting the game picture. For example, the cloud game client may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a handheld computer; the terminal device that processes the game data is the cloud game server in the cloud. When playing, the player operates the cloud game client to send operation instructions to the cloud game server; the server runs the game according to the instructions, encodes and compresses data such as the game pictures, returns the data to the client over the network, and finally the client decodes the data and outputs the game pictures.
In an alternative embodiment, the terminal device may be a local terminal device. The local terminal device stores the game program and presents the game pictures. It interacts with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The local terminal device may provide the graphical user interface to the player in a variety of ways: for example, it may be rendered on the display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including game pictures, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
In order to make the technical solution of the present disclosure clearer, a method for generating a scene map in the present exemplary embodiment will be described in detail below by taking a terminal device as an example of a local terminal device.
In step S110, a map configuration table is acquired, wherein the map configuration table includes a plurality of positions of a scene map and map content data corresponding to the respective positions.
In the exemplary embodiments of the present disclosure, the scene map is the game map included in a game scene; it contains all positions in the game, and each position corresponds to a set of map content data. The positions in the scene map may be represented by coordinates or by labels, which the present disclosure does not particularly limit. The map content data corresponding to each position represents the map state at that position.
In an exemplary embodiment of the present disclosure, the kind of the map content data may include at least one of relief height data, relief texture data, river region data, or building model data, which is not particularly limited by the present disclosure.
In an exemplary embodiment of the present disclosure, the map configuration table includes a plurality of positions in the scene map, and further includes map content data corresponding to each position. That is, each position in the scene map can find the map content data corresponding to the position according to the position coordinates or the position index in the map configuration table.
For example, the map configuration table may be stored in a database in the form of a table in which all the positions in the scene map are stored, and the positions may be arranged in the order in the scene map. The map configuration table also stores map content data corresponding to each position, and one position in the map configuration table may correspond to one kind of map content data or may correspond to a plurality of kinds of map content data, which is not particularly limited in the present disclosure.
In an exemplary embodiment of the present disclosure, the map configuration table may include two or more sub-map configuration tables, each of which may include a plurality of positions in the scene map and the map content data of a target category corresponding to each position. The map content data may include two or more categories. Which target category of map content data corresponds to each position in the sub-map configuration tables may be set according to the actual situation; for example, categories of the same importance level may be stored in the same sub-map configuration table according to the importance of each category of data included in the map content data. Of course, different sub-map configuration tables may also be configured according to the data volumes of the different categories, which is not particularly limited in the present disclosure.
For example, the map configuration table includes four sub-map configuration tables, including a first sub-map configuration table, a second sub-map configuration table, a third sub-map configuration table, and a fourth sub-map configuration table. The first sub-map configuration table is used for configuring relief height data, the second sub-map configuration table is used for configuring relief texture data, the third sub-map configuration table is used for configuring river region data, and the fourth sub-map configuration table is used for configuring building model data. Of course, two or three sub-map configuration tables may be further included in the map configuration table, and two, three or more kinds of map content data may be further included in each sub-map configuration table, which is not particularly limited in the present disclosure.
In addition, the map configuration table may include only one or two sub-map configuration tables, and each sub-map configuration table may include one or more categories of map content data. For example, the map configuration table may include a single sub-map configuration table containing relief height data, relief texture data, river region data, or building model data. For another example, the map configuration table may include two sub-map configuration tables: a first sub-map configuration table containing relief height data and relief texture data, and a second sub-map configuration table containing river region data and building model data. Of course, the first sub-map configuration table may contain only the relief height data or only the relief texture data, and the second sub-map configuration table may contain only the river region data or only the building model data, which is not particularly limited in the present disclosure.
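As a concrete illustration of the table-driven lookup just described, the following is a minimal Python sketch of a map configuration table split into per-category sub-tables keyed by position. All names, the grid-coordinate keys, and the storage layout are illustrative assumptions; the disclosure only requires that each position resolve to its map content data.

```python
# Illustrative sketch of a map configuration table with per-category sub-tables.
# Names, keys, and values are assumptions; only the position -> content-data
# mapping is prescribed by the method described above.

relief_height_table = {(0, 0): 12.5, (0, 1): 13.0, (1, 0): 11.8}
relief_texture_table = {(0, 0): "grass", (0, 1): "rock", (1, 0): "sand"}
river_region_table = {(0, 1): {"width": 3.0}}
building_model_table = {(1, 0): ["house_01"]}

def lookup_content_data(position):
    """Collect every category of map content data configured at `position`."""
    return {
        "relief_height": relief_height_table.get(position, 0.0),  # blank patch -> 0
        "relief_texture": relief_texture_table.get(position),
        "river_region": river_region_table.get(position),
        "building_models": building_model_table.get(position, []),
    }

print(lookup_content_data((0, 1)))
# {'relief_height': 13.0, 'relief_texture': 'rock',
#  'river_region': {'width': 3.0}, 'building_models': []}
```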
In step S120, two or more preset map base units are loaded according to a first scene area captured by the virtual camera in the scene map, and the two or more map base units are spliced in the first scene area to obtain a first spliced map.
In the exemplary embodiments of the present disclosure, the virtual camera is a special object in the game engine that can be placed at any corner of the game scene. By changing parameters of the virtual camera such as its position, rotation angle, shot scale, and projection, the desired game scene can be acquired; the game scene captured in the scene map according to the adjusted virtual camera parameters is the first scene area.
In an exemplary embodiment of the present disclosure, the preset map base unit includes a preset number of patch models, where a patch model may be a blank plane whose map content data are all 0. The size of the blank plane may be set according to the actual situation: the smaller the blank plane, the higher the accuracy of the obtained target scene map and the more details are presented. The preset number may likewise be set according to the actual situation: the larger the preset number, the higher the accuracy of the obtained target scene map and the more details are presented. The present disclosure does not particularly limit either. For example, fig. 2 shows the schematic structure of a preset map base unit; as shown in fig. 2, the preset map base unit 200 is composed of 3×3 patch models 201.
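Such a base unit can be generated procedurally. The sketch below builds one base unit as an N×N grid of blank quads at height 0, matching the structure of fig. 2; the corner-list representation and function names are assumptions for illustration only.

```python
# Sketch of a preset map base unit composed of patch models (cf. fig. 2):
# a 3x3 grid of blank unit quads whose height (z) is 0 everywhere.
# The quad/corner representation is an illustrative assumption.

def make_base_unit(patches_per_side=3, patch_size=1.0):
    """Return one base unit as a list of quads, each a list of (x, y, z) corners."""
    quads = []
    for j in range(patches_per_side):
        for i in range(patches_per_side):
            x0, y0 = i * patch_size, j * patch_size
            x1, y1 = x0 + patch_size, y0 + patch_size
            quads.append([(x0, y0, 0.0), (x1, y0, 0.0),
                          (x1, y1, 0.0), (x0, y1, 0.0)])
    return quads

print(len(make_base_unit()))  # 9 blank patches, matching the 3x3 unit of fig. 2
```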
In an exemplary embodiment of the present disclosure, loading two or more preset map base units 200 according to the first scene area captured in the scene map by the virtual camera specifically includes: determining the target number of preset map base units 200 to be loaded according to the size of the first scene area captured in the scene map by the virtual camera and the size of a preset map base unit 200, and loading that number of map base units 200.
The target number of map base units 200 to load is determined by the size of the first scene area and the size of a map base unit 200. For example, if the range of the first scene area captured by the virtual camera is (50, 50) to (100, 100), and the range of each map base unit 200 is (10, 10), the target number of map base units 200 to be loaded is 5×5 = 25.
In the exemplary embodiments of the present disclosure, a preset number of map base units 200 are stored in the cache pool, and this number may be set according to the actual situation. The preset number of map base units 200 in the cache pool may be determined according to the size of the maximum scene area captured by the virtual camera; for example, the maximum scene area captured by the virtual camera under a certain viewing angle is obtained, and the preset number is determined from the size of that area and the size of a map base unit 200. Of course, the preset number may also be determined by the maximum number of map base units 200 that the cache pool can accommodate, for example 50×50 or 100×100 map base units 200, which is not limited in this disclosure.
For example, fig. 3 shows a schematic diagram of a first scene area captured by a virtual camera. As shown in fig. 3, the map base units 200 are constructed according to a spatial coordinate system 302, with their patches parallel to the xoy plane. Assuming that the virtual camera looks at the scene map at a certain angle, the plane where the map base units 200 lie intersects the view frustum 301 of the virtual camera in a quadrilateral whose four vertices are (lb, rb, rt, lt). This quadrilateral is the range of the scene area the virtual camera can capture, and the number of map base units 200 required to fill the first scene area, that is, the number that needs to be loaded, can be calculated from the size of the quadrilateral and the size of a map base unit 200.
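A minimal sketch of that computation follows: the four corner rays of the view frustum are intersected with the z = 0 plane the units lie on, and the bounding extent of the resulting quadrilateral is divided by the unit size. The camera position and corner-ray directions below are made-up values for illustration, not values from the disclosure.

```python
import math

# Sketch: intersect the frustum's four corner rays with the plane z = 0 to get
# the quadrilateral (lb, rb, rt, lt), then derive how many base units are
# needed to cover it. Camera values and ray directions are assumptions.

def ray_hits_ground(origin, direction):
    """Intersect a ray with the z = 0 plane; `direction` must point downward."""
    t = -origin[2] / direction[2]
    return (origin[0] + t * direction[0], origin[1] + t * direction[1])

def units_to_load(corner_rays, unit_size):
    pts = [ray_hits_ground(o, d) for o, d in corner_rays]
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    nx = math.ceil((max(xs) - min(xs)) / unit_size)
    ny = math.ceil((max(ys) - min(ys)) / unit_size)
    return nx, ny

camera = (55.0, 55.0, 40.0)  # 40 units above the plane of the base units
corner_dirs = [(-0.6, -0.5, -1.0), (0.6, -0.5, -1.0),
               (0.6, 0.5, -1.0), (-0.6, 0.5, -1.0)]
rays = [(camera, d) for d in corner_dirs]
print(units_to_load(rays, unit_size=10.0))  # (5, 4): load a 5 x 4 grid of units
```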
In an exemplary embodiment of the present disclosure, the two or more preset map base units 200 are loaded, specifically, from the cache pool.
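The cache pool can be sketched as a simple object pool, as below. `MapBaseUnit`, the pool interface, and the fallback allocation when the pool runs dry are all assumed names and behavior, not something the disclosure specifies.

```python
# Object-pool sketch of the cache pool: a preset number of base units is
# created up front, and units are acquired/released instead of being
# instantiated and destroyed on every camera change.

class MapBaseUnit:
    def __init__(self):
        self.content = None  # later filled from the map configuration table

class CachePool:
    def __init__(self, preset_count):
        self._free = [MapBaseUnit() for _ in range(preset_count)]

    def acquire(self):
        # Assumption: allocate a fresh unit if the pool happens to run dry.
        return self._free.pop() if self._free else MapBaseUnit()

    def release(self, unit):
        unit.content = None       # content data is deleted before pooling
        self._free.append(unit)

pool = CachePool(preset_count=50 * 50)  # sized like the 50x50 example above
unit = pool.acquire()
pool.release(unit)
```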
In an exemplary embodiment of the present disclosure, the splicing of two or more map base units 200 in the first scene area to form a first spliced map specifically includes: starting from the position of the origin of the spatial coordinate system 302 corresponding to the first scene area, sequentially splicing the loaded target number of map base units 200 in the first scene area to obtain the first spliced map.
The spatial coordinate system 302 is established with the starting point of the entire in-game scene map as its origin, where the starting point of the scene map may be the point with the largest position coordinate or the point with the smallest position coordinate in the scene map, which is not specifically limited in the present disclosure. Of course, the spatial coordinate system 302 may also be established with a vertex of the first scene area as the origin, and that vertex may be the point with the smallest or the largest position coordinate in the first scene area, which is likewise not specifically limited.
In addition, as shown in fig. 3, the spatial coordinate system 302 may be a three-dimensional cartesian coordinate system, and the planes of the x-axis and the y-axis of the spatial coordinate system 302 may be used to represent the plane size of the first scene area, and the z-axis of the spatial coordinate system 302 may be used to represent the topography of the first scene area.
Taking as an example a spatial coordinate system 302 whose origin is the point with the smallest position coordinate in the first scene area: starting from the origin, map base units 200 are laid progressively along the x-axis of the spatial coordinate system 302 until the row along the x-axis in the first scene area is full, and laying then proceeds progressively along the y-axis until the entire first scene area is filled with map base units 200. The map base units 200 may also be laid first along the y-axis and then along the x-axis within the first scene area, which is not particularly limited in this disclosure.
Of course, the plane in which the y-axis and the z-axis of the spatial coordinate system 302 are located may be used to represent the plane size of the first scene area, and the x-axis of the spatial coordinate system 302 may be used to represent the topography of the first scene area. The present disclosure is not particularly limited thereto.
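A sketch of this row-by-row laying follows; a usage example is included after the function. The unit factory, the dictionary result keyed by scene-map position, and the parameter names are illustrative assumptions.

```python
# Sketch of stitching: starting at the first scene area's origin, lay units
# along x until the row is full, then advance along y, recording each unit's
# position in the scene map. `make_unit` stands in for pool.acquire().

def stitch(make_unit, origin, nx, ny, unit_size):
    """Return {(scene_x, scene_y): unit} for an nx-by-ny spliced map."""
    stitched = {}
    for row in range(ny):          # advance along the y-axis per full row
        for col in range(nx):      # fill one row along the x-axis first
            pos = (origin[0] + col * unit_size, origin[1] + row * unit_size)
            stitched[pos] = make_unit()
    return stitched

first_spliced = stitch(dict, origin=(50.0, 50.0), nx=5, ny=5, unit_size=10.0)
print(len(first_spliced))  # 25 units, each keyed by its scene-map position
```

Keying each unit by its scene-map position is what makes step S130 a direct lookup into the map configuration table.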
In an exemplary embodiment of the present disclosure, after the splicing of two or more map base units 200 in the first scene area to obtain the first spliced map, the position of each map base unit 200 in the first spliced map in the scene map is obtained.
In step S130, map content data corresponding to each map base unit 200 in the first stitched map is obtained from the map configuration table according to the position of each map base unit 200 in the first stitched map in the scene map.
In an exemplary embodiment of the present disclosure, map content data corresponding to each map base unit 200 is searched in a map configuration table according to the position of each map base unit 200 in a scene map, i.e., one or more of relief height data, relief texture data, river region data, or building model data on each map base unit 200 in a first spliced map is acquired.
In step S140, the map content data corresponding to each map base unit 200 in the first stitched map is loaded into the corresponding map base unit 200, respectively, to obtain a first target scene map for display.
In an exemplary embodiment of the present disclosure, each map base unit 200 is rendered according to the acquired map content data corresponding to each map base unit 200, and the first stitched map after rendering is displayed on the graphical user interface as the first target scene map.
In addition, since the first target scene map is formed by splicing a plurality of map base units 200, when the relief height data on adjacent map base units 200 are inconsistent, a seam crack may appear at the spliced edge. The crack at the spliced edge then needs to be processed: for example, the relief height data of the two map base units 200 at the cracked edge may be acquired, the average of the two height values at the edge computed, and the relief height data of both map base units 200 at the edge set to that average. The averaging may be performed for the two edges at the crack as a whole or for a plurality of points along the crack, which is not particularly limited in this disclosure.
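The per-point averaging variant can be sketched as below; storing one height sample per edge vertex is an assumption about the data layout, not something the disclosure prescribes.

```python
# Sketch of seam repair: where two units share an edge, set both units'
# relief heights along that edge to the point-by-point average, closing the
# visible crack. One height sample per edge vertex is an assumed layout.

def blend_seam(edge_a, edge_b):
    """Average two units' height samples along their shared edge."""
    blended = [(a + b) / 2.0 for a, b in zip(edge_a, edge_b)]
    return blended, blended  # both units adopt the same averaged edge

left_edge = [10.0, 12.0, 11.0]    # heights of unit A along the shared edge
right_edge = [10.6, 11.0, 12.2]   # heights of unit B along the same edge
print(blend_seam(left_edge, right_edge)[0])  # [10.3, 11.5, 11.6]
```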
In an exemplary embodiment of the present disclosure, fig. 4 shows a flowchart of a specific embodiment of the scene map generation method of the present disclosure. As shown in fig. 4, in step S410, a map configuration table is acquired, wherein the map configuration table includes a plurality of positions of the scene map and the map content data corresponding to the respective positions; in step S420, the map base units 200 are configured and stored in the cache pool; in step S430, two or more map base units 200 are loaded from the cache pool according to the range size of the first scene area captured in the scene map by the virtual camera; in step S440, the two or more map base units 200 are spliced in the first scene area to obtain a first spliced map; in step S450, the position of each map base unit 200 in the first spliced map in the scene map is obtained, and the map content data corresponding to each map base unit 200 is obtained from the map configuration table according to that position; in step S460, the map content data corresponding to each map base unit 200 is loaded onto the corresponding map base unit 200 to obtain the first target scene map for display.
In an exemplary embodiment of the present disclosure, fig. 5 shows a schematic flow chart of a dynamic replacement scene map, and as shown in fig. 5, the flow includes at least steps S510 to S540, which are described in detail as follows:
in step S510, in response to a shot-switching instruction for the virtual camera, the virtual camera is controlled to switch shots and a second scene area captured in the game scene after the virtual camera is switched shots is determined.
In an exemplary embodiment of the present disclosure, the lens switching instruction may be to translate the virtual camera, or to adjust the angle of view of the virtual camera, such as zoom in or out, or to raise or lower the pitch angle of the virtual lens, or the like. The shot-switching instruction may be formed according to a sliding or clicking operation performed by the player on the graphical user interface, which is not specifically limited in this disclosure.
In step S520, two or more preset map base units 200 are loaded according to the second scene area captured in the game scene after the camera switch, and the two or more map base units 200 are spliced in the second scene area to obtain a second spliced map.
In an exemplary embodiment of the present disclosure, after capturing the second scene area, two or more map base units 200 are determined according to the range size of the second scene area and the range size of the preset map base unit 200, and the two or more map base units 200 are spliced in the second scene area to obtain a second spliced map. The method for obtaining the second stitched map is the same as the method for obtaining the first stitched map, and will not be described herein.
In step S530, the map content data corresponding to each map base unit 200 in the second stitched map is obtained from the map configuration table according to the position of each map base unit 200 in the second stitched map in the scene map.
In the exemplary embodiment of the present disclosure, map content data corresponding to the position coordinates of each map base unit 200 is acquired in the map configuration table according to the position coordinates of each map base unit 200 in the second spliced map.
In step S540, the map content data corresponding to each map base unit 200 in the second spliced map is loaded into the corresponding map base unit 200, respectively, to obtain the second target scene map for display.
In an exemplary embodiment of the present disclosure, each map base unit 200 is rendered according to the map content data corresponding to each map base unit 200 in the second stitched map to obtain a second target scene map, and the second target scene map is displayed on the graphical user interface.
In an exemplary embodiment of the present disclosure, after a second target scene map for display is obtained, the second target scene map is displayed, and the first target scene map is deleted. Specifically, the map content data in the first target scene map is deleted, and the map base unit 200 corresponding to the first target scene map is moved into the buffer pool for storage. Of course, after the second target scene map is obtained, the first target scene map may be deleted, and then the second target scene map may be displayed on the graphical user interface, which is not specifically limited in this disclosure.
In an exemplary embodiment of the present disclosure, fig. 6 shows a flowchart of a specific embodiment of updating the target scene map. As shown in fig. 6, in step S610, in response to a shot switching instruction for the virtual camera, the virtual camera is controlled to capture a second scene area; in step S620, two or more map base units 200 are loaded from the cache pool according to the second scene area; in step S630, the two or more map base units 200 are spliced in the second scene area to obtain a second spliced map; in step S640, the map content data corresponding to each map base unit 200 in the second spliced map is obtained according to the positions of the two or more map base units 200 in the second scene area; in step S650, the corresponding map content data in the second spliced map is loaded into the corresponding map base units 200 to obtain a second target scene map for display; in step S660, the map content data in the first target scene map is deleted, and each map base unit 200 corresponding to the first target scene map is moved into the cache pool for storage; in step S670, the second target scene map is displayed on the graphical user interface.
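Tying the flows of fig. 5 and fig. 6 together, the sketch below rebuilds the map for the newly captured scene area and recycles the old map's units into the pool. The `_Unit`/`_Pool` stand-ins and all parameters are illustrative assumptions, not the disclosure's actual implementation.

```python
# Sketch of the camera-switch flow (steps S610-S670): stitch a second map
# over the newly captured area, fill it from the configuration table, then
# delete the first map's content data and return its units to the pool.

class _Unit:
    content = None  # stand-in for a pooled map base unit

class _Pool:  # stand-in cache pool (see the pool sketch earlier)
    def __init__(self, n):
        self._free = [_Unit() for _ in range(n)]
    def acquire(self):
        return self._free.pop() if self._free else _Unit()
    def release(self, unit):
        unit.content = None            # step S660: delete the content data
        self._free.append(unit)

def on_camera_switch(pool, old_map, origin, nx, ny, unit_size, table):
    new_map = {}
    for row in range(ny):              # steps S620-S630: load and stitch
        for col in range(nx):
            pos = (origin[0] + col * unit_size, origin[1] + row * unit_size)
            unit = pool.acquire()
            unit.content = table.get(pos)  # steps S640-S650: fill content data
            new_map[pos] = unit
    for unit in old_map.values():      # step S660: recycle the first map
        pool.release(unit)
    return new_map                     # step S670: display this second map

pool = _Pool(100)
first = on_camera_switch(pool, {}, (50.0, 50.0), 5, 5, 10.0, {})
second = on_camera_switch(pool, first, (60.0, 50.0), 5, 5, 10.0, {})
print(len(second))  # 25
```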
The following describes an embodiment of an apparatus of the present disclosure, which may be used to perform the above-described method for generating a scene map of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to an embodiment of the method for generating a scene map described in the present disclosure.
Fig. 7 schematically illustrates a block diagram of a scene map generating apparatus according to an embodiment of the present disclosure.
Referring to fig. 7, a scene map generation apparatus 700 according to an embodiment of the present disclosure includes: a configuration acquisition module 701, a map stitching module 702, a data acquisition module 703, and a map determination module 704. Specifically:
the configuration acquisition module 701 is configured to acquire a map configuration table, wherein the map configuration table includes a plurality of positions of a scene map and the map content data corresponding to each position;
the map stitching module 702 is configured to load two or more preset map base units 200 according to a first scene area captured by the virtual camera in the scene map, and splice the two or more map base units 200 in the first scene area to obtain a first spliced map;
the data acquisition module 703 is configured to acquire, from the map configuration table, the map content data corresponding to each map base unit 200 in the first spliced map according to the position of each map base unit 200 in the scene map;
the map determination module 704 is configured to respectively load the map content data corresponding to each map base unit 200 in the first spliced map into the corresponding map base unit 200, so as to obtain a first target scene map for display.
In an exemplary embodiment of the present disclosure, the map stitching module 702 may further be configured to determine the target number of preset map base units 200 to be loaded according to the size of the first scene area captured in the scene map by the virtual camera and the size of a preset map base unit 200, and to load the target number of map base units 200.
In an exemplary embodiment of the present disclosure, the map stitching module 702 may also be configured to load two or more map base units 200 from a cache pool, the cache pool being used to store a preset number of map base units 200.
In an exemplary embodiment of the present disclosure, the scene map generation apparatus 700 may further include a map update module (not shown in the drawings), which may be configured to, in response to a shot switching instruction for the virtual camera, control the virtual camera to switch shots and determine a second scene area captured in the game scene after the switch; load two or more preset map base units 200 according to the second scene area captured in the game scene after the virtual camera switches shots, and splice the two or more map base units 200 in the second scene area to obtain a second spliced map; acquire, from the map configuration table, the map content data corresponding to each map base unit 200 in the second spliced map according to the position of each map base unit 200 in the scene map; and respectively load the map content data corresponding to each map base unit 200 in the second spliced map into the corresponding map base unit 200 to obtain a second target scene map for display.
The specific details of each module of the above scene map generation apparatus have already been described in detail in the corresponding scene map generation method, and will not be repeated here.
It should be noted that although in the above detailed description several modules or units of a device for performing are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
An electronic device 800 according to such an embodiment of the invention is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 8, the electronic device 800 is embodied in the form of a general purpose computing device. Components of electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one storage unit 820, a bus 830 connecting the different system components (including the storage unit 820 and the processing unit 810), and a display unit 840.
Wherein the storage unit stores program code executable by the processing unit 810, such that the processing unit 810 performs the steps according to various exemplary embodiments of the present invention described in the "exemplary method" section of this specification. For example, the processing unit 810 may perform step S110 shown in fig. 1, acquiring a map configuration table including a plurality of positions of a scene map and the map content data corresponding to the respective positions; step S120, loading two or more preset map base units 200 according to a first scene area captured by the virtual camera in the scene map, and splicing the two or more map base units 200 in the first scene area to obtain a first spliced map; step S130, acquiring, from the map configuration table, the map content data corresponding to each map base unit 200 in the first spliced map according to the position of each map base unit 200 in the scene map; and step S140, respectively loading the map content data corresponding to each map base unit 200 in the first spliced map into the corresponding map base unit 200, so as to obtain the first target scene map for display.
The storage unit 820 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 8201 and/or cache memory 8202, and may further include Read Only Memory (ROM) 8203.
Storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 1000 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 800, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850. Also, the electronic device 800 may communicate with one or more networks such as a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet, through the network adapter 860. As shown, the network adapter 860 communicates with the other modules of the electronic device 800 over the bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
Referring to fig. 9, a program product 900 for implementing the above-described method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A method for generating a scene map, comprising:
acquiring a map configuration table, wherein the map configuration table comprises a plurality of positions of a scene map and map content data corresponding to each position;
loading two or more preset map base units according to a first scene area captured by a virtual camera in the scene map, and splicing the two or more map base units in the first scene area to obtain a first spliced map, wherein each preset map base unit comprises a preset number of patch models;
acquiring map content data corresponding to each map base unit in the first spliced map from the map configuration table according to the position of each map base unit in the first spliced map in the scene map; and
loading the map content data corresponding to each map base unit in the first spliced map into the corresponding map base unit, so as to obtain a first target scene map for display.
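By way of illustration only, the steps of claim 1 can be sketched in a few lines of Python; every identifier below (BaseUnit, fill_from_config, the position-keyed dictionary standing in for the map configuration table) is a hypothetical name for this sketch, not part of the claimed implementation:

    from dataclasses import dataclass

    @dataclass
    class BaseUnit:
        scene_pos: tuple       # position of this base unit in the scene map
        content: dict = None   # map content data, filled in from the table

    def fill_from_config(spliced_map, config_table):
        # For each base unit of the spliced map, look up the content data
        # recorded for its scene-map position and load it into that unit.
        for unit in spliced_map:
            unit.content = config_table.get(unit.scene_pos, {})
        return spliced_map     # the first target scene map, ready for display

    # A two-unit spliced map and a position-keyed configuration table.
    config = {(0, 0): {"terrain": "plain"}, (1, 0): {"terrain": "river"}}
    target = fill_from_config([BaseUnit((0, 0)), BaseUnit((1, 0))], config)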
2. The method for generating a scene map according to claim 1, wherein the loading of two or more preset map base units according to the first scene area captured by the virtual camera in the scene map comprises:
determining a target number of preset map base units to be loaded according to the size of the first scene area captured by the virtual camera in the scene map and the size of the preset map base units; and
loading the target number of map base units.
3. The method for generating a scene map according to claim 2, wherein the splicing of the two or more map base units in the first scene area to obtain the first spliced map comprises:
starting from the origin of the spatial coordinate system corresponding to the first scene area, sequentially splicing the loaded target number of map base units in the first scene area to obtain the first spliced map.
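A hedged sketch of claims 2 and 3, assuming square units of side unit_size and a row-major splice order (the claim fixes only the starting origin, not the traversal order):

    import math

    def target_unit_count(area_w, area_h, unit_size):
        # Claim 2: derive the number of base units to load from the size
        # of the captured scene area and the size of one base unit.
        return math.ceil(area_w / unit_size) * math.ceil(area_h / unit_size)

    def splice_from_origin(area_w, area_h, unit_size):
        # Claim 3: place units one by one starting from the origin of the
        # spatial coordinate system corresponding to the first scene area.
        cols, rows = math.ceil(area_w / unit_size), math.ceil(area_h / unit_size)
        return [(c * unit_size, r * unit_size)
                for r in range(rows) for c in range(cols)]

    # A 25 x 15 area covered by 10 x 10 units needs ceil(2.5) * ceil(1.5) = 6.
    assert target_unit_count(25, 15, 10) == len(splice_from_origin(25, 15, 10)) == 6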
4. The method for generating a scene map according to claim 1, wherein after the splicing of the two or more map base units in the first scene area to obtain the first spliced map, the method further comprises:
acquiring the position of each map base unit in the first spliced map in the scene map.
5. The method for generating a scene map according to claim 1, wherein the loading of two or more preset map base units comprises:
loading two or more map base units from a cache pool, wherein the cache pool is used for storing a preset number of map base units.
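A minimal sketch of the cache pool of claim 5; the class name and the capacity handling are assumptions made for illustration:

    class UnitCachePool:
        # Holds a preset number of base units so they can be reused
        # instead of being instantiated again on every load.
        def __init__(self, capacity):
            self.capacity = capacity
            self._pool = []

        def acquire(self):
            # Reuse a cached base unit if one is available; otherwise
            # fall back to creating a fresh (empty) unit.
            return self._pool.pop() if self._pool else {"content": None}

        def release(self, unit):
            # Return a base unit to the pool, up to the preset capacity.
            if len(self._pool) < self.capacity:
                self._pool.append(unit)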
6. The method according to claim 1, wherein the map configuration table comprises two or more sub-map configuration tables, and the map content data comprises two or more categories of map content data; each of the two or more sub-map configuration tables comprises a plurality of positions of the scene map and map content data of a target category corresponding to each position.
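Claim 6 splits the configuration table by content category; one plausible shape for such sub-tables is sketched below (the category names are illustrative, echoing the data kinds listed in claim 10):

    # One sub-map configuration table per target category of content data;
    # each sub-table maps scene-map positions to data of that category only.
    sub_tables = {
        "terrain_height": {(0, 0): 12.5, (1, 0): 9.0},
        "river_region":   {(1, 0): True},
        "building_model": {(0, 0): "house_01"},
    }

    def content_for(pos):
        # Assemble the content data for one position across all categories.
        return {cat: table[pos] for cat, table in sub_tables.items() if pos in table}

    assert content_for((1, 0)) == {"terrain_height": 9.0, "river_region": True}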
7. The method for generating a scene map according to claim 1, further comprising:
in response to a lens switching instruction for the virtual camera, controlling the virtual camera to switch the lens and determining a second scene area captured in a game scene after the switch;
loading two or more preset map base units according to the second scene area captured in the game scene after the virtual camera is switched, and splicing the two or more map base units in the second scene area to obtain a second spliced map;
acquiring map content data corresponding to each map base unit in the second spliced map from the map configuration table according to the position of each map base unit in the second spliced map in the scene map; and
loading the map content data corresponding to each map base unit in the second spliced map into the corresponding map base unit, so as to obtain a second target scene map for display.
8. The method for generating a scene map according to claim 7, wherein after the second target scene map is obtained for display, the method further comprises:
displaying the second target scene map and deleting the first target scene map.
9. The method for generating a scene map according to claim 8, wherein the deleting of the first target scene map comprises:
deleting the map content data in the first target scene map, and moving the map base units corresponding to the first target scene map into a cache pool for storage.
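Claims 7 to 9 together describe the lens-switch path; the hedged sketch below builds the second map, then deletes the first by clearing its content data and recycling its base units (here the cache pool is a plain list, and all names are hypothetical):

    def switch_and_recycle(first_map, second_positions, pool, config_table):
        # Splice a second map for the newly captured scene area and fill
        # each base unit from the map configuration table (claim 7).
        second_map = []
        for pos in second_positions:
            unit = pool.pop() if pool else {"scene_pos": None, "content": None}
            unit["scene_pos"] = pos
            unit["content"] = config_table.get(pos)
            second_map.append(unit)
        # Delete the first target scene map (claims 8 and 9): drop its
        # content data and move its base units into the cache pool.
        for unit in first_map:
            unit["content"] = None
            pool.append(unit)
        return second_map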
10. The method for generating a scene map according to claim 1, wherein the map content data comprises at least one of terrain height data, terrain texture data, river region data, and building model data.
11. An apparatus for generating a scene map, comprising:
a configuration acquisition module, configured to acquire a map configuration table, wherein the map configuration table comprises a plurality of positions of a scene map and map content data corresponding to each position;
a map splicing module, configured to load two or more preset map base units according to a first scene area captured by a virtual camera in the scene map, and splice the two or more map base units in the first scene area to obtain a first spliced map, wherein each preset map base unit comprises a preset number of patch models;
a data acquisition module, configured to acquire map content data corresponding to each map base unit in the first spliced map from the map configuration table according to the position of each map base unit in the first spliced map in the scene map; and
a map determination module, configured to load the map content data corresponding to each map base unit in the first spliced map into the corresponding map base unit, so as to obtain a first target scene map for display.
12. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method for generating a scene map according to any one of claims 1 to 10.
13. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for generating a scene map according to any one of claims 1 to 10.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010819236.3A CN111773709B (en) 2020-08-14 2020-08-14 Scene map generation method and device, computer storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111773709A CN111773709A (en) 2020-10-16
CN111773709B (en) 2024-02-02

Family

ID=72762680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010819236.3A Active CN111773709B (en) 2020-08-14 2020-08-14 Scene map generation method and device, computer storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111773709B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112402978A (en) * 2020-11-13 2021-02-26 上海幻电信息科技有限公司 Map generation method and device
CN114529664A (en) * 2020-11-20 2022-05-24 深圳思为科技有限公司 Three-dimensional scene model construction method, device, equipment and computer storage medium
CN112473136B (en) * 2020-11-27 2022-01-11 完美世界(北京)软件科技发展有限公司 Map generation method and device, computer equipment and computer readable storage medium
CN112530012A (en) * 2020-12-24 2021-03-19 网易(杭州)网络有限公司 Virtual earth surface processing method and device and electronic device
CN112584236B (en) * 2020-12-30 2022-10-14 米哈游科技(上海)有限公司 Image erasing method and device, electronic equipment and storage medium
CN112584235B (en) * 2020-12-30 2022-10-28 米哈游科技(上海)有限公司 Image erasing method and device, electronic equipment and storage medium
CN112584237B (en) * 2020-12-30 2022-06-17 米哈游科技(上海)有限公司 Image erasing method and device, electronic equipment and storage medium
CN114661755A (en) * 2022-03-29 2022-06-24 北京百度网讯科技有限公司 Display mode, device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102881046A (en) * 2012-09-07 2013-01-16 山东神戎电子股份有限公司 Method for generating three-dimensional electronic map
US9662564B1 (en) * 2013-03-11 2017-05-30 Google Inc. Systems and methods for generating three-dimensional image models using game-based image acquisition
CN107481195A (en) * 2017-08-24 2017-12-15 山东慧行天下文化传媒有限公司 Method and device based on more sight spot region intelligence sectional drawings generation electronic map
WO2020038441A1 (en) * 2018-08-24 2020-02-27 腾讯科技(深圳)有限公司 Map rendering method and apparatus, computer device and storage medium
CN111340704A (en) * 2020-02-25 2020-06-26 网易(杭州)网络有限公司 Map generation method, map generation device, storage medium and electronic device
CN111445576A (en) * 2020-03-17 2020-07-24 腾讯科技(深圳)有限公司 Map data acquisition method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant