CN111773709A - Scene map generation method and device, computer storage medium and electronic equipment - Google Patents

Scene map generation method and device, computer storage medium and electronic equipment Download PDF

Info

Publication number
CN111773709A
Authority
CN
China
Prior art keywords: map, scene, content data, basic unit, generating
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010819236.3A
Other languages
Chinese (zh)
Other versions
CN111773709B (English)
Inventor
黄春昊
谢冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority application: CN202010819236.3A
Publication of CN111773709A
Application granted
Publication of CN111773709B
Legal status: Active

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to the technical field of games and provides a scene map generation method, a scene map generation device, a computer storage medium, and an electronic device. The method includes the following steps: acquiring a map configuration table; loading two or more preset map base units according to a first scene area captured by a virtual camera in a scene map, and stitching the two or more map base units in the first scene area to obtain a first stitched map; acquiring, from the map configuration table, the map content data corresponding to each map base unit in the first stitched map according to the position of that unit in the scene map; and loading the map content data corresponding to each map base unit in the first stitched map into the corresponding map base unit, so as to obtain a first target scene map for display. The method and device can reduce the loading time of the target scene map and save system overhead.

Description

Scene map generation method and device, computer storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of game technologies, and in particular, to a scene map generation method, a scene map generation apparatus, a computer-readable storage medium, and an electronic device.
Background
With the development of the game industry, game developers place increasing emphasis on the production of game scene maps in order to provide players with a good game experience. While pursuing the completeness and clarity of a game map, developers also face practical constraints: when a large scene map is used, limited device memory and CPU processing capability lead to long loading times and high memory occupation.
In the prior art, a large scene map is loaded in blocks: the game scene is divided into multiple blocks according to different spatial regions, and only the blocks visible to the in-game virtual camera are loaded. However, this approach requires artists to pre-build the entire game scene, which is inefficient, and the modification process is cumbersome whenever the game scene needs to change.
In view of this, there is a need in the art for a new method and apparatus for generating a scene map.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to a scene map generation method, a scene map generation device, a computer-readable storage medium, and an electronic device that improve, at least to some extent, the efficiency of generating a scene map.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, there is provided a method of generating a scene map, the method including: acquiring a map configuration table, wherein the map configuration table includes a plurality of positions of a scene map and the map content data corresponding to each position; loading two or more preset map base units according to a first scene area captured by a virtual camera in the scene map, and stitching the two or more map base units in the first scene area to obtain a first stitched map; acquiring, from the map configuration table, the map content data corresponding to each map base unit in the first stitched map according to the position of that unit in the scene map; and loading the map content data corresponding to each map base unit in the first stitched map into the corresponding map base unit, so as to obtain a first target scene map for display.
In some exemplary embodiments of the present disclosure, the loading of two or more preset map base units according to a first scene area captured by a virtual camera in the scene map includes: determining the target number of the preset map basic units to be loaded according to the size of a first scene area captured in the scene map by the virtual camera and the size of the preset map basic units; and loading the target number of the map base units.
In some exemplary embodiments of the present disclosure, stitching the two or more map base units in the first scene area to form a first stitched map includes: starting from the position of the origin of the spatial coordinate system corresponding to the first scene area, sequentially stitching the loaded target number of map base units in the first scene area to obtain the first stitched map.
In some exemplary embodiments of the present disclosure, after the stitching the two or more map base units in the first scene area to obtain a first stitched map, the method further includes: and acquiring the position of each map basic unit in the first mosaic map in the scene map.
In some exemplary embodiments of the present disclosure, the predetermined map base unit includes a predetermined number of patch models.
In some exemplary embodiments of the present disclosure, the loading of two or more preset map base units includes: more than two map basic units are loaded from a cache pool, and the cache pool is used for storing a preset number of map basic units.
In some exemplary embodiments of the present disclosure, the map configuration table includes two or more sub-map configuration tables, and the map content data includes two or more kinds of map content data; the two or more sub-map configuration tables each include a plurality of positions of the scene map and map content data of the target category corresponding to each of the positions.
In some exemplary embodiments of the present disclosure, the method further includes: in response to a lens-switching instruction for the virtual camera, controlling the virtual camera to switch lenses and determining a second scene area captured in the game scene after the switch; loading two or more preset map base units according to the second scene area, and stitching the two or more map base units in the second scene area to obtain a second stitched map; acquiring, from the map configuration table, the map content data corresponding to each map base unit in the second stitched map according to the position of each map base unit in the scene map; and loading the map content data corresponding to each map base unit in the second stitched map into the corresponding map base unit, so as to obtain a second target scene map for display.
In some exemplary embodiments of the present disclosure, after obtaining the second target scene map for display, the method further comprises: and displaying the second target scene map, and deleting the first target scene map.
In some exemplary embodiments of the present disclosure, the deleting the first target scene map includes: and deleting the map content data in the first target scene map, and moving the map basic unit corresponding to the first target scene map into a cache pool for storage.
In some exemplary embodiments of the present disclosure, the map content data includes at least one of terrain height data, landform texture data, river region data, and building model data.
According to an aspect of the present disclosure, there is provided a scene map generation apparatus including: an acquisition module, configured to acquire a map configuration table, wherein the map configuration table includes a plurality of positions of a scene map and the map content data corresponding to each position; a map stitching module, configured to load two or more preset map base units according to a first scene area captured by a virtual camera in the scene map, and stitch the two or more map base units in the first scene area to obtain a first stitched map; a data acquisition module, configured to acquire, from the map configuration table, the map content data corresponding to each map base unit in the first stitched map according to the position of each map base unit in the scene map; and a map determination module, configured to load the map content data corresponding to each map base unit in the first stitched map into the corresponding map base unit, so as to obtain a first target scene map for display.
According to an aspect of the present disclosure, there is provided a computer-readable medium, on which a computer program is stored, which when executed by a processor, implements a method of generating a scene map as described in the above embodiments.
According to an aspect of the present disclosure, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs, which when executed by the one or more processors, cause the one or more processors to implement the generation method of the scene map as described in the above embodiments.
As can be seen from the foregoing technical solutions, the scene map generation method and apparatus, the computer-readable storage medium, and the electronic device in the exemplary embodiments of the present disclosure have at least the following advantages and positive effects:
the method for generating the scene map includes: obtaining a map configuration table, which contains a plurality of positions of the scene map and the map content data corresponding to each position; loading two or more preset map base units according to a first scene area captured by a virtual camera in the scene map, and stitching them in the first scene area to obtain a first stitched map; acquiring, from the map configuration table, the map content data corresponding to each map base unit in the first stitched map according to its position in the scene map; and finally loading that map content data into the corresponding map base units to obtain a first target scene map for display. This method has at least three benefits. First, the target scene map can be generated from the map configuration table and the map base units, which reduces the loading time of the scene map, the memory it occupies, and system overhead. Second, the target scene map is rendered from the map configuration table, so it changes whenever the map content data in the table changes; this enables flexible modification of the target scene map, reduces the workload of map makers, and improves generation efficiency. Third, the target scene map can be switched dynamically according to the capture range of the virtual camera.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 schematically shows a flow diagram of a method of generating a scene map according to an embodiment of the present disclosure;
fig. 2 schematically shows a structural diagram of a preset map base unit according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a structural schematic of a first scene area captured from a virtual camera according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flowchart of a method of generating a scene map, in accordance with a particular embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow diagram for dynamically changing a scene map, according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a flowchart for updating a target scene map, in accordance with a particular embodiment of the present disclosure;
FIG. 7 schematically shows a block diagram of a generation apparatus of a scene map according to an embodiment of the present disclosure;
FIG. 8 schematically shows a block schematic of an electronic device according to an embodiment of the present disclosure;
fig. 9 schematically shows a program product schematic according to an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In the related art, a scene area is mainly loaded in blocks: the map is divided into multiple block areas according to the different spatial regions of the game scene, and when the game loads, only the scene areas visible to the in-game virtual camera are loaded. However, this block-loading technique requires artists to pre-build the entire game scene, and the process of modifying the game scene is complex.
Based on the problems in the related art, an embodiment of the present disclosure provides a method for generating a scene map, which may be applied to the generation of game scenes as well as realistic images. The embodiment of the present disclosure is explained by taking a game scene map as an example. Fig. 1 shows a flow diagram of the method for generating a scene map; as shown in fig. 1, the method includes at least the following steps:
step S110: acquiring a map configuration table, wherein the map configuration table comprises a plurality of positions of a scene map and map content data corresponding to the positions;
step S120: loading more than two preset map basic units according to a first scene area captured in a scene map by a virtual camera, and splicing the more than two map basic units in the first scene area to obtain a first spliced map;
step S130: acquiring map content data corresponding to each map basic unit in the first spliced map from a map configuration table according to the position of each map basic unit in the first spliced map in the scene map;
step S140: and respectively loading the map content data corresponding to each map basic unit in the first spliced map into the corresponding map basic units so as to obtain a first target scene map for display.
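The four steps above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation; all function and field names (`generate_scene_map`, `load_unit`, `"content"`) are assumptions.

```python
# Minimal sketch of steps S110-S140; all names are illustrative, not from the patent.

def generate_scene_map(config_table, camera_area, unit_size, load_unit):
    """Build a target scene map for the area captured by the virtual camera.

    config_table : dict mapping a (col, row) position to its map content data (S110)
    camera_area  : ((min_x, min_y), (max_x, max_y)) captured by the camera
    unit_size    : edge length of one square map base unit
    load_unit    : callable returning a fresh (or pooled) empty base unit
    """
    (min_x, min_y), (max_x, max_y) = camera_area
    cols = -(-(max_x - min_x) // unit_size)   # ceiling division
    rows = -(-(max_y - min_y) // unit_size)

    # S120: load and stitch units, starting from the area's origin corner.
    stitched = {}
    for r in range(rows):
        for c in range(cols):
            unit = load_unit()
            # Position of this unit within the full scene map.
            pos = (min_x // unit_size + c, min_y // unit_size + r)
            # S130: look up the content data for this position in the config table.
            # S140: load that content data into the unit.
            unit["content"] = config_table.get(pos)
            stitched[pos] = unit
    return stitched
```

For a camera area of (50, 50) to (100, 100) with units of size 10, this loads and fills a 5 × 5 grid of units.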
This method for generating a scene map has at least three benefits. First, the target scene map can be generated from the map configuration table and the map base units, which shortens the loading time of the target scene map, reduces the memory it occupies, and saves system overhead. Second, the target scene map is rendered from the map configuration table, so it can be changed simply by changing the map content data in the table; this enables flexible modification of the target scene map, reduces the workload of map makers, and improves generation efficiency. Third, the target scene map can be switched dynamically according to the capture range of the virtual camera.
The method for generating the scene map in the embodiment of the present disclosure may be executed on a terminal device or a server. The terminal device may be a local terminal device. When the method runs on a server, it may be implemented as a cloud game.
In an alternative embodiment, cloud gaming refers to a game mode based on cloud computing. In this mode, the entity that runs the game program is separated from the entity that presents the game picture: the storage and execution of the scene map generation method are completed on a cloud game server, while a cloud game client receives and sends data and presents the game picture. The cloud game client may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a handheld computer; the terminal device that actually performs the game data processing, however, is the cloud game server. During play, the player operates the cloud game client to send operation instructions to the cloud game server; the server runs the game according to these instructions, encodes and compresses data such as game pictures, and returns them to the client over the network, where they are finally decoded and the game picture is output.
In an alternative embodiment, the terminal device may be a local terminal device. The local terminal device stores a game program and is used for presenting a game screen. The local terminal device is used for interacting with the player through a graphical user interface, namely, a game program is downloaded and installed and operated through an electronic device conventionally. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways, for example, it may be rendered for display on a display screen of the terminal or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including a game screen and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
In order to make the technical solution of the present disclosure clearer, a method for generating a scene map in the present exemplary embodiment is described in detail below by taking a terminal device as a local terminal device as an example.
In step S110, a map configuration table is acquired, wherein the map configuration table includes a plurality of positions of a scene map and map content data corresponding to the positions.
In an exemplary embodiment of the present disclosure, the scene map is a game map included in a game scene in the game, in which all positions in the game are included, and each position corresponds to a set of map content data. The position in the scene map may be represented in a coordinate form or a reference number form, which is not specifically limited by the present disclosure. The corresponding map content data at each location represents the map state at that location.
In an exemplary embodiment of the present disclosure, the category of the map content data may include at least one of terrain height data, relief texture data, river area data, or building model data, which is not particularly limited by the present disclosure.
In an exemplary embodiment of the present disclosure, the map configuration table includes a plurality of positions in the scene map, and further includes map content data corresponding to the respective positions. That is, each position in the scene map can find the map content data corresponding to the position according to the position coordinates or the position label in the map configuration table.
For example, the map configuration table may be stored in a database in a table form, and all the positions in the scene map are stored in the map configuration table, and the positions may be arranged in an order in the scene map. The map configuration table further stores map content data corresponding to each location, and one location in the map configuration table may correspond to one kind of map content data or to a plurality of kinds of map content data, which is not particularly limited by the present disclosure.
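As an illustration of such a table, a minimal in-memory layout keyed by position might look like the following. The field names and values are assumptions for illustration only; the patent does not prescribe a storage schema.

```python
# Illustrative layout for a map configuration table keyed by position;
# field names and values are assumptions, not taken from the patent.
map_config_table = {
    # (col, row): the map content data stored for that scene-map position
    (0, 0): {"terrain_height": 12.5, "texture": "grass",
             "river": None, "building": None},
    (0, 1): {"terrain_height": 3.0, "texture": "sand",
             "river": "river_01", "building": None},
    (1, 0): {"terrain_height": 20.0, "texture": "rock",
             "river": None, "building": "tower_02"},
}

def content_at(table, position):
    """Return the map content data stored for one scene-map position."""
    return table.get(position)
```

In a real game the table would cover every position of the scene map and could be persisted in a database, as the paragraph above describes.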
In an exemplary embodiment of the present disclosure, the map configuration table may include two or more sub-map configuration tables, and the two or more sub-map configuration tables may respectively include a plurality of locations in the scene map and map content data of the target category corresponding to the respective locations. Wherein the map content data may include more than two kinds of map content data. The map content data of the target type corresponding to each position in the two or more sub-map arrangement tables may be set according to actual conditions, for example, the map content data may be set according to the importance of each type of data included in the map content data, and the map content data of the same importance level among the plurality of types of data may be stored in the same sub-map arrangement table. Of course, different sub-map configuration tables may be configured according to the size of the data volume of different types of data, which is not specifically limited by the present disclosure.
For example, the map configuration table includes four sub-map configuration tables, which respectively include a first sub-map configuration table, a second sub-map configuration table, a third sub-map configuration table, and a fourth sub-map configuration table. The first sub-map configuration table is used for configuring terrain height data, the second sub-map configuration table is used for configuring landform texture data, the third sub-map configuration table is used for configuring river region data, and the fourth sub-map configuration table is used for configuring building model data. Of course, the map configuration table may further include two or three sub-map configuration tables, and each sub-map configuration table may further include two, three or more types of map content data, which is not particularly limited in this disclosure.
In addition, the map configuration table may further include one or two sub-map configuration tables, and each sub-map configuration table may include one or more kinds of map content data. For example, the map configuration table includes a sub-map configuration table, and the sub-map configuration table includes terrain height data, relief texture data, river region data, or building model data. For another example, the map configuration table includes two sub-map configuration tables, which respectively include a first sub-map configuration table and a second sub-map configuration table. The first sub-map configuration table may include terrain height data and relief texture data, and the second sub-map configuration table may include river region data and building model data. Of course, the first map configuration table may also include only terrain height data or only landform texture data, and the second map configuration table may also include only river region data or building model data, which is not specifically limited by the present disclosure.
In step S120, two or more preset map base units are loaded according to a first scene area captured by the virtual camera in the scene map, and the two or more map base units are spliced in the first scene area to obtain a first spliced map.
In an exemplary embodiment of the present disclosure, the virtual camera is a special object in the game engine. It can be placed at any corner of the game scene, and the desired view can be obtained by changing parameters of the virtual camera such as its position, rotation angle, field of view, and projection. The game scene captured in the scene map under the adjusted camera parameters is the first scene area.
In an exemplary embodiment of the present disclosure, the preset map base unit includes a preset number of patch models. Each patch model may be a blank plane whose map content data is 0, and the size of each blank plane may be set according to the actual situation: the smaller the blank plane, the higher the accuracy of the resulting target scene map and the more detail it presents. Likewise, the larger the preset number of patches, the higher the accuracy and detail of the target scene map. The present disclosure does not specifically limit either value. For example, fig. 2 shows a schematic structural diagram of a preset map base unit; as shown in fig. 2, the preset map base unit 200 is composed of 3 × 3 patch models 201.
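The patch-grid structure of a base unit, mirroring the 3 × 3 layout of fig. 2, can be sketched as follows. The list-of-lists representation is a hypothetical choice; the patent does not prescribe a data structure.

```python
# A base unit as an n x n grid of blank patch models, each with content
# value 0, mirroring the 3 x 3 layout of Fig. 2 (representation assumed).
def make_base_unit(patches_per_side=3):
    return [[0 for _ in range(patches_per_side)]
            for _ in range(patches_per_side)]
```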
In an exemplary embodiment of the present disclosure, loading more than two preset map base units 200 according to a first scene area captured by a virtual camera in a scene map specifically includes: determining the number of targets of the preset map basic unit 200 to be loaded according to the size of the first scene area captured by the virtual camera in the scene map and the size of the preset map basic unit 200, and loading the map basic units 200 of the number of targets.
For example, if the first scene area captured by the virtual camera spans (50, 50) to (100, 100), and each map base unit 200 covers a (10, 10) range, then the target number of map base units 200 to be loaded is 5 × 5.
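The count computation in this example amounts to a ceiling division along each axis; a short sketch (function name is illustrative):

```python
import math

def units_to_load(area_min, area_max, unit_size):
    """Number of base units along x and y needed to cover the captured area."""
    nx = math.ceil((area_max[0] - area_min[0]) / unit_size[0])
    ny = math.ceil((area_max[1] - area_min[1]) / unit_size[1])
    return nx, ny

# The example above: area (50, 50)-(100, 100), unit size (10, 10) -> 5 x 5 units.
```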
In an exemplary embodiment of the present disclosure, a preset number of map base units 200 are stored in a cache pool, and this preset number may be set according to the actual situation. It may be determined from the size of the maximum scene area the virtual camera can capture: for example, if the scene area captured at a certain viewing angle is the largest, that maximum area is obtained, and the preset number is derived from its size and the size of a map base unit 200. Alternatively, the preset number may be determined by the maximum number of map base units 200 the cache pool can hold, for example 50 × 50 or 100 × 100 map base units 200; the present disclosure does not specifically limit this.
For example, fig. 3 shows a schematic structural diagram of a first scene area captured by a virtual camera. As shown in fig. 3, the map basic units 200 are constructed in a spatial coordinate system 302, with their patches parallel to the xoy plane. Assuming that the virtual camera views the scene map at a certain angle, the plane in which the map basic units 200 lie intersects the view frustum 301 of the virtual camera to form a quadrilateral with vertices (lb, rb, rt, lt); this quadrilateral is the range of the scene area the virtual camera can capture. According to the range size of the quadrilateral and the size of the map basic unit 200, the preset number of map basic units 200 required to cover the first scene area can be calculated, that is, the number of map basic units 200 that need to be loaded.
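A minimal sketch of this intersection-and-count step, assuming the patch plane is z = 0 and the four corner rays of the view frustum 301 are given as (origin, direction) pairs; the function names and the bounding-box simplification are illustrative, not from the patent:

```python
import math

def ray_plane_z0(origin, direction):
    """Intersect a frustum corner ray with the patch plane z = 0."""
    t = -origin[2] / direction[2]
    return (origin[0] + t * direction[0], origin[1] + t * direction[1])

def units_covering_frustum(corner_rays, unit_size):
    """Count the map basic units needed to cover the quadrilateral
    (lb, rb, rt, lt) cut out of the patch plane by the view frustum,
    using the quadrilateral's axis-aligned bounding box."""
    points = [ray_plane_z0(o, d) for o, d in corner_rays]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    nx = math.ceil((max(xs) - min(xs)) / unit_size[0])
    ny = math.ceil((max(ys) - min(ys)) / unit_size[1])
    return nx * ny

# A camera 10 units above the plane looking straight down, square frustum:
rays = [((0, 0, 10), (-1, -1, -1)), ((0, 0, 10), (1, -1, -1)),
        ((0, 0, 10), (1, 1, -1)), ((0, 0, 10), (-1, 1, -1))]
count = units_covering_frustum(rays, (10, 10))  # → 4
```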
In an exemplary embodiment of the present disclosure, the two or more preset map basic units 200 are loaded; specifically, they may be loaded from the cache pool.
In an exemplary embodiment of the present disclosure, splicing the two or more map basic units 200 in the first scene area to obtain the first spliced map specifically includes: starting from the position of the origin of the spatial coordinate system 302 corresponding to the first scene area, sequentially splicing the loaded target number of map basic units 200 in the first scene area to obtain the first spliced map.
The spatial coordinate system 302 is established by taking a starting point of the entire scene map in the game as the origin, where the starting point may be the point with the largest position coordinate in the scene map or the point with the smallest position coordinate; the present disclosure does not specifically limit this. Of course, the spatial coordinate system 302 may also be established with a vertex of the first scene area as the origin, where the vertex may be the point with the smallest position coordinate in the first scene area or the point with the largest position coordinate, which is not specifically limited by the present disclosure.
In addition, as shown in fig. 3, the spatial coordinate system 302 may be a three-dimensional cartesian coordinate system, a plane in which an x-axis and a y-axis of the spatial coordinate system 302 are located may be used to represent a plane size of the first scene area, and a z-axis of the spatial coordinate system 302 may be used to represent a terrain height of the first scene area.
Taking as an example the spatial coordinate system 302 established with the point having the smallest position coordinate in the first scene area as the origin: starting from the origin of the spatial coordinate system 302, map basic units 200 are laid one by one along the x-axis; after the row along the x-axis of the first scene area is full, laying proceeds row by row along the y-axis until the entire first scene area is covered by map basic units 200. The map basic units 200 may also be laid in the first scene area along the y-axis first and then along the x-axis, which is not specifically limited by the present disclosure.
Of course, the plane of the y-axis and the z-axis of the spatial coordinate system 302 may instead be used to represent the plane size of the first scene area, with the x-axis of the spatial coordinate system 302 representing the terrain height of the first scene area. The present disclosure does not specifically limit this.
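The row-by-row splicing order described above can be sketched as follows; the function and variable names are illustrative, not from the patent:

```python
def tile_positions(area_min, area_max, unit_size):
    """Yield the placement position of each map basic unit, row by row.

    Splicing starts at the area's minimum corner, fills a row along the
    x-axis, then advances along the y-axis until the area is covered.
    """
    y = area_min[1]
    while y < area_max[1]:
        x = area_min[0]
        while x < area_max[0]:
            yield (x, y)
            x += unit_size[0]
        y += unit_size[1]

positions = list(tile_positions((0, 0), (30, 20), (10, 10)))
# 3 columns × 2 rows of units, laid from the origin along x first
```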
In an exemplary embodiment of the present disclosure, after the two or more map basic units 200 are spliced in the first scene area to obtain the first spliced map, the position of each map basic unit 200 in the first spliced map in the scene map is acquired.
In step S130, map content data corresponding to each map basic unit 200 in the first spliced map is acquired from the map configuration table according to the position of each map basic unit 200 in the first spliced map in the scene map.
In an exemplary embodiment of the present disclosure, according to the position of each map basic unit 200 in the scene map, the map content data corresponding to each map basic unit 200 is looked up in the map configuration table; that is, one or more of the terrain height data, terrain texture data, river region data, or building model data on each map basic unit 200 in the first spliced map is obtained.
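A minimal sketch of a position-keyed lookup; the table contents and field names here are hypothetical, since the patent only requires that the configuration table map positions to content data:

```python
# A hypothetical map configuration table: position in the scene map -> map
# content data for the map basic unit at that position.
map_config = {
    (0, 0):  {"terrain_height": 12.0, "terrain_texture": "grass"},
    (10, 0): {"terrain_height": 15.5, "terrain_texture": "rock"},
}

def content_for(position):
    """Look up the map content data for a map basic unit by its position."""
    return map_config.get(position, {})

content_for((10, 0))  # → {'terrain_height': 15.5, 'terrain_texture': 'rock'}
```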
In step S140, the map content data corresponding to each map basic unit 200 in the first spliced map is loaded into the corresponding map basic unit 200, respectively, so as to obtain a first target scene map for display.
In an exemplary embodiment of the present disclosure, each map basic unit 200 is rendered according to its acquired map content data, and the rendered first spliced map is displayed on the graphical user interface as the first target scene map.
In addition, since the first target scene map is formed by splicing a plurality of map basic units 200, a splicing crack may appear at the splicing edge between two map basic units 200 whose terrain height data are inconsistent. In this case the crack at the spliced edge needs to be processed; for example, the terrain height data of the two map basic units 200 with the edge crack may be obtained, the average of the two terrain heights at the edge calculated, and the terrain height data of both map basic units 200 at the edge crack modified to the average height. Of course, the averaging may be performed for the two edges at the edge crack as a whole, or for a plurality of individual points along the edge crack, which is not particularly limited in this disclosure.
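The point-by-point averaging variant can be sketched as follows; the function name is illustrative, and the edges are assumed to be sampled at the same points along the seam:

```python
def blend_seam(edge_a, edge_b):
    """Average two adjacent units' terrain heights point by point along their
    shared edge; writing the result back to both units makes the heights
    match exactly, closing the splicing crack."""
    return [(a + b) / 2.0 for a, b in zip(edge_a, edge_b)]

seam = blend_seam([10.0, 12.0, 14.0], [11.0, 12.0, 16.0])  # → [10.5, 12.0, 15.0]
```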
In an exemplary embodiment of the present disclosure, fig. 4 is a flowchart illustrating a specific embodiment of the method for generating a scene map of the present disclosure. As shown in fig. 4, in step S410, a map configuration table is acquired, where the map configuration table includes a plurality of positions of the scene map and map content data corresponding to each position; in step S420, the map basic units 200 are configured and stored in the cache pool; in step S430, two or more map basic units 200 are loaded from the cache pool according to the range size of a first scene area captured by the virtual camera in the scene map; in step S440, the two or more map basic units 200 are spliced in the first scene area to obtain a first spliced map; in step S450, the position in the scene map of each map basic unit 200 in the first spliced map is acquired, and the map content data corresponding to each map basic unit 200 is obtained from the map configuration table according to that position; in step S460, the map content data corresponding to each map basic unit 200 is loaded onto the corresponding map basic unit 200 to obtain the first target scene map for display.
In an exemplary embodiment of the disclosure, fig. 5 shows a schematic flowchart of dynamically changing a scene map, and as shown in fig. 5, the flowchart at least includes steps S510 to S540, which are described in detail as follows:
in step S510, in response to a lens switching instruction for the virtual camera, the virtual camera is controlled to switch lenses and a second scene area captured in the game scene after the virtual camera switches lenses is determined.
In an exemplary embodiment of the present disclosure, the lens switching instruction may translate the virtual camera, adjust the viewing angle of the virtual camera (such as zooming in or zooming out), or raise or lower the pitch angle of the virtual camera, and so on. The lens switching instruction may be formed by a sliding or clicking operation performed by a player on the graphical user interface, which is not specifically limited by the present disclosure.
In step S520, two or more preset map basic units 200 are loaded according to the second scene area captured in the game scene after the virtual camera switches lenses, and the two or more map basic units 200 are spliced in the second scene area to obtain a second spliced map.
In an exemplary embodiment of the present disclosure, after the second scene area is captured, the map basic units 200 to be loaded are determined according to the range size of the second scene area and the preset range size of the map basic unit 200, and the loaded map basic units 200 are spliced in the second scene area to obtain the second spliced map. The method for obtaining the second spliced map is the same as that for obtaining the first spliced map, and is not described again here.
In step S530, according to the position in the scene map of each map basic unit 200 in the second spliced map, the map content data corresponding to each map basic unit 200 in the second spliced map is acquired from the map configuration table.
In an exemplary embodiment of the present disclosure, the map content data corresponding to the position coordinates of each map basic unit 200 in the second spliced map is acquired from the map configuration table according to those position coordinates.
In step S540, the map content data corresponding to each map basic unit 200 in the second spliced map is loaded into the corresponding map basic unit 200, respectively, so as to obtain a second target scene map for display.
In an exemplary embodiment of the present disclosure, each map basic unit 200 in the second spliced map is rendered according to its corresponding map content data to obtain the second target scene map, and the second target scene map is displayed on the graphical user interface.
In an exemplary embodiment of the present disclosure, after the second target scene map for display is obtained, the second target scene map is displayed and the first target scene map is deleted. Specifically, the map content data in the first target scene map is deleted, and the map basic units 200 corresponding to the first target scene map are moved into the cache pool for storage. Of course, after the second target scene map is obtained, the first target scene map may also be deleted first and the second target scene map then displayed on the graphical user interface, which is not limited in this disclosure.
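The recycle step, where freed units return to the cache pool with their content cleared, can be sketched as follows; the class and attribute names are illustrative, not from the patent:

```python
class MapBasicUnit:
    """A single patch-model unit; `content` holds the loaded map content data."""
    def __init__(self):
        self.content = None
        self.position = None

class MapUnitCachePool:
    """A hypothetical cache pool holding a preset number of reusable units."""
    def __init__(self, preset_number):
        self._free = [MapBasicUnit() for _ in range(preset_number)]

    def acquire(self):
        # Reuse a pooled unit when available instead of allocating a new one.
        return self._free.pop() if self._free else MapBasicUnit()

    def release(self, unit):
        # Delete the old map content data, then store the unit for reuse.
        unit.content = None
        unit.position = None
        self._free.append(unit)

pool = MapUnitCachePool(preset_number=4)
unit = pool.acquire()
unit.content = {"terrain_height": 12.0}
pool.release(unit)  # the unit returns to the pool with its content cleared
```

Units acquired for the second spliced map then reuse the objects released when the first target scene map was deleted, avoiding repeated allocation as the camera moves.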
In an exemplary embodiment of the present disclosure, fig. 6 shows a flowchart of a specific embodiment of updating the target scene map of the present disclosure. As shown in fig. 6, in step S610, in response to a lens switching instruction for the virtual camera, the virtual camera is controlled to capture a second scene area; in step S620, two or more map basic units 200 are loaded from the cache pool according to the second scene area; in step S630, the two or more map basic units 200 are spliced in the second scene area to obtain a second spliced map; in step S640, map content data corresponding to each map basic unit 200 in the second spliced map is acquired according to the positions of the two or more map basic units 200 in the second scene area; in step S650, the corresponding map content data in the second spliced map is loaded into the corresponding map basic units 200 to obtain a second target scene map for display; in step S660, the map content data in the first target scene map is deleted, and each map basic unit 200 corresponding to the first target scene map is moved into the cache pool for storage; in step S670, the second target scene map is displayed on the graphical user interface.
The following describes embodiments of the apparatus of the present disclosure, which may be used to perform the method for generating the scene map of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method for generating a scene map described above in the present disclosure.
Fig. 7 schematically shows a block diagram of a generation apparatus of a scene map according to an embodiment of the present disclosure.
Referring to fig. 7, a scene map generation apparatus 700 according to an embodiment of the present disclosure includes: an acquisition configuration module 701, a map splicing module 702, a data acquisition module 703, and a map determining module 704. Specifically:
the acquisition configuration module 701 is configured to acquire a map configuration table, where the map configuration table includes a plurality of positions of a scene map and map content data corresponding to each position;
the map splicing module 702 is configured to load two or more preset map base units 200 according to a first scene area captured by the virtual camera in the scene map, and splice the two or more map base units 200 in the first scene area to obtain a first spliced map;
the data obtaining module 703 is configured to obtain, from the map configuration table, map content data corresponding to each map basic unit 200 in the first merged map according to the position of each map basic unit 200 in the first merged map in the scene map;
the map determining module 704 is configured to load the map content data corresponding to each map basic unit 200 in the first merged map into the corresponding map basic unit 200, respectively, so as to obtain a first target scene map for display.
In an exemplary embodiment of the present disclosure, the map splicing module 702 may be further configured to determine the target number of preset map basic units 200 to be loaded according to the size of the first scene area captured by the virtual camera in the scene map and the size of the preset map basic unit 200, and to load the target number of map basic units 200.
In an exemplary embodiment of the present disclosure, the map splicing module 702 may be further configured to load two or more map basic units 200 from a cache pool, where the cache pool is used to store a preset number of map basic units 200.
In an exemplary embodiment of the present disclosure, the scene map generation apparatus 700 may further include a map updating module (not shown in the figure), which may be configured to: in response to a lens switching instruction for the virtual camera, control the virtual camera to switch lenses and determine a second scene area captured in the game scene after the virtual camera switches lenses; load two or more preset map basic units 200 according to the second scene area, and splice the two or more map basic units 200 in the second scene area to obtain a second spliced map; acquire, from the map configuration table, the map content data corresponding to each map basic unit 200 in the second spliced map according to the position of each map basic unit 200 in the second spliced map in the scene map; and load the map content data corresponding to each map basic unit 200 in the second spliced map into the corresponding map basic units 200, respectively, to obtain a second target scene map for display.
The specific details of each module of the above scene map generation apparatus have already been described in detail in the corresponding scene map generation method, and are therefore not repeated here.
It should be noted that although several modules or units of the apparatus for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided into and embodied by a plurality of modules or units.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or program product. Thus, various aspects of the invention may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 800 according to this embodiment of the invention is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present invention.
As shown in fig. 8, electronic device 800 is in the form of a general purpose computing device. The components of the electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one memory unit 820, a bus 830 connecting different system components (including the memory unit 820 and the processing unit 810), and a display unit 840.
Wherein the storage unit stores program code executable by the processing unit 810, so that the processing unit 810 performs the steps according to various exemplary embodiments of the present invention described in the "exemplary methods" section of this specification. For example, the processing unit 810 may execute step S110 shown in fig. 1: acquiring a map configuration table, where the map configuration table includes a plurality of positions of a scene map and map content data corresponding to each position; step S120: loading two or more preset map basic units 200 according to a first scene area captured by a virtual camera in the scene map, and splicing the two or more map basic units 200 in the first scene area to obtain a first spliced map; step S130: acquiring map content data corresponding to each map basic unit 200 in the first spliced map from the map configuration table according to the position of each map basic unit 200 in the first spliced map in the scene map; and step S140: loading the map content data corresponding to each map basic unit 200 in the first spliced map into the corresponding map basic units 200, respectively, so as to obtain a first target scene map for display.
The storage unit 820 may include a readable medium in the form of a volatile memory unit, such as a random access memory unit (RAM) 8201 and/or a cache memory unit 8202, and may further include a read-only memory unit (ROM) 8203.
The storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 830 may be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 1000 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a viewer to interact with the electronic device 800, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 800 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 850. Also, the electronic device 800 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 860. As shown, the network adapter 860 communicates with the other modules of the electronic device 800 via the bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
Referring to fig. 9, a program product 900 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (14)

1. A method for generating a scene map, comprising:
acquiring a map configuration table, wherein the map configuration table comprises a plurality of positions of a scene map and map content data corresponding to each position;
loading two or more preset map basic units according to a first scene area captured in the scene map by a virtual camera, and splicing the two or more map basic units in the first scene area to obtain a first spliced map;
according to the position of each map basic unit in the first spliced map in the scene map, acquiring map content data corresponding to each map basic unit in the first spliced map from the map configuration table;
and respectively loading the map content data corresponding to each map basic unit in the first spliced map into the corresponding map basic unit to obtain a first target scene map for display.
2. The method for generating a scene map according to claim 1, wherein the loading of two or more preset map basic units according to a first scene area captured by a virtual camera in the scene map comprises:
determining the target number of preset map basic units to be loaded according to the size of the first scene area captured in the scene map by the virtual camera and the size of the preset map basic unit;
and loading the target number of map basic units.
3. The method for generating a scene map according to claim 2, wherein the splicing the two or more map basic units in the first scene area to obtain a first spliced map comprises:
sequentially splicing the loaded target number of map basic units in the first scene area, starting from the position of the origin of the spatial coordinate system corresponding to the first scene area, to obtain the first spliced map.
4. The method for generating a scene map according to claim 1, wherein after the splicing the two or more map basic units in the first scene area to obtain a first spliced map, the method further comprises:
acquiring the position of each map basic unit in the first spliced map in the scene map.
5. The method for generating a scene map according to claim 1, wherein the preset map basic unit includes a preset number of patch models.
6. The method for generating a scene map according to claim 1, wherein the loading of two or more preset map basic units comprises:
loading two or more map basic units from a cache pool, wherein the cache pool is used for storing a preset number of map basic units.
7. The method for generating a scene map according to claim 1, wherein the map configuration table includes two or more sub-map configuration tables, and the map content data includes two or more kinds of map content data; the two or more sub-map configuration tables each include a plurality of positions of the scene map and map content data of the target category corresponding to each of the positions.
8. The method for generating a scene map according to claim 1, further comprising:
responding to a lens switching instruction aiming at the virtual camera, controlling the virtual camera to switch lenses and determining a second scene area captured in the game scene after the virtual camera switches lenses;
loading two or more preset map basic units according to a second scene area captured in the game scene after the virtual camera switches lenses, and splicing the two or more map basic units in the second scene area to obtain a second spliced map;
according to the position of each map basic unit in the second spliced map in the scene map, acquiring map content data corresponding to each map basic unit in the second spliced map from the map configuration table;
and respectively loading the map content data corresponding to each map basic unit in the second spliced map into the corresponding map basic unit to obtain a second target scene map for display.
9. The method for generating a scene map according to claim 8, wherein after obtaining the second target scene map for display, the method further comprises:
and displaying the second target scene map, and deleting the first target scene map.
10. The method for generating a scene map according to claim 9, wherein the deleting the first target scene map includes:
and deleting the map content data in the first target scene map, and moving the map basic unit corresponding to the first target scene map into a cache pool for storage.
11. The method for generating a scene map of claim 1, wherein the map content data includes at least one of terrain height data, terrain texture data, river region data, and building model data.
12. An apparatus for generating a scene map, comprising:
an acquisition configuration module, configured to acquire a map configuration table, wherein the map configuration table comprises a plurality of positions of a scene map and map content data corresponding to each position;
a map splicing module, configured to load two or more preset map basic units according to a first scene area captured in the scene map by a virtual camera, and splice the two or more map basic units in the first scene area to obtain a first spliced map;
a data acquisition module, configured to acquire map content data corresponding to each map basic unit in the first spliced map from the map configuration table according to the position of each map basic unit in the first spliced map in the scene map;
and the map determining module is used for loading the map content data corresponding to each map basic unit in the first spliced map into the corresponding map basic unit respectively so as to obtain a first target scene map for display.
13. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method for generating a scene map according to any one of claims 1 to 11.
14. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for generating a scene map according to any one of claims 1 to 11.
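Read together, claims 1 and 8–12 describe a tile-streaming pipeline: preset map basic units are spliced to cover the scene area captured by the virtual camera, each unit's content data is looked up by its position in a map configuration table, and when the camera moves, a second map is built while the first map's units are recycled through a cache pool. A minimal Python sketch of that pipeline follows; all class, function, and parameter names, and the tile size, are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the claimed pipeline. Names and the GRID size are
# illustrative assumptions; the patent does not specify an implementation.

GRID = 64  # assumed side length of one map basic unit, in world units


class MapBasicUnit:
    """A reusable tile: a grid position plus loaded map content data."""
    def __init__(self):
        self.position = None
        self.content = None

    def load(self, position, content):
        self.position = position
        self.content = content


def visible_positions(camera_rect):
    """Grid positions of all units overlapping the camera-captured scene area."""
    (x0, y0), (x1, y1) = camera_rect
    xs = range(int(x0 // GRID), int(x1 // GRID) + 1)
    ys = range(int(y0 // GRID), int(y1 // GRID) + 1)
    return {(x, y) for x in xs for y in ys}


def build_scene_map(camera_rect, config_table, cache_pool):
    """Splice units over the captured area and load their content data
    (splicing, data acquisition, and map determination in claims 1 and 12)."""
    spliced = {}
    for pos in visible_positions(camera_rect):
        # Reuse a cached unit when one is available, else allocate a new one.
        unit = cache_pool.pop() if cache_pool else MapBasicUnit()
        # Content is keyed by the unit's position in the scene map (claim 8).
        unit.load(pos, config_table.get(pos, "empty"))
        spliced[pos] = unit
    return spliced


def switch_scene_map(old_map, camera_rect, config_table, cache_pool):
    """Claims 8-10: build the second target scene map for display, then delete
    the first map's content data and move its units into the cache pool."""
    new_map = build_scene_map(camera_rect, config_table, cache_pool)
    for unit in old_map.values():
        unit.content = None          # delete map content data
        cache_pool.append(unit)      # recycle the unit for later reuse
    return new_map
```

Under this reading, the cache pool of claim 10 means camera movement mostly reloads content data into already-allocated tiles instead of reallocating them, which is the apparent point of the design.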
CN202010819236.3A 2020-08-14 2020-08-14 Scene map generation method and device, computer storage medium and electronic equipment Active CN111773709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010819236.3A CN111773709B (en) 2020-08-14 2020-08-14 Scene map generation method and device, computer storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN111773709A 2020-10-16
CN111773709B 2024-02-02

Family

ID=72762680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010819236.3A Active CN111773709B (en) 2020-08-14 2020-08-14 Scene map generation method and device, computer storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111773709B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102881046A (en) * 2012-09-07 2013-01-16 山东神戎电子股份有限公司 Method for generating three-dimensional electronic map
US9662564B1 (en) * 2013-03-11 2017-05-30 Google Inc. Systems and methods for generating three-dimensional image models using game-based image acquisition
CN107481195A (en) * 2017-08-24 2017-12-15 山东慧行天下文化传媒有限公司 Method and device based on more sight spot region intelligence sectional drawings generation electronic map
WO2020038441A1 (en) * 2018-08-24 2020-02-27 腾讯科技(深圳)有限公司 Map rendering method and apparatus, computer device and storage medium
CN111340704A (en) * 2020-02-25 2020-06-26 网易(杭州)网络有限公司 Map generation method, map generation device, storage medium and electronic device
CN111445576A (en) * 2020-03-17 2020-07-24 腾讯科技(深圳)有限公司 Map data acquisition method and device, storage medium and electronic device


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112402978A (en) * 2020-11-13 2021-02-26 上海幻电信息科技有限公司 Map generation method and device
CN114529664A (en) * 2020-11-20 2022-05-24 深圳思为科技有限公司 Three-dimensional scene model construction method, device, equipment and computer storage medium
CN112473136A (en) * 2020-11-27 2021-03-12 完美世界(北京)软件科技发展有限公司 Map generation method and device, computer equipment and computer readable storage medium
WO2022111038A1 (en) * 2020-11-27 2022-06-02 完美世界(北京)软件科技发展有限公司 Map generation method and apparatus, computer device and computer readable storage medium
CN112530012A (en) * 2020-12-24 2021-03-19 网易(杭州)网络有限公司 Virtual earth surface processing method and device and electronic device
CN112584237A (en) * 2020-12-30 2021-03-30 米哈游科技(上海)有限公司 Image erasing method and device, electronic equipment and storage medium
CN112584236A (en) * 2020-12-30 2021-03-30 米哈游科技(上海)有限公司 Image erasing method and device, electronic equipment and storage medium
CN112584235A (en) * 2020-12-30 2021-03-30 米哈游科技(上海)有限公司 Image erasing method and device, electronic equipment and storage medium
CN112584237B (en) * 2020-12-30 2022-06-17 米哈游科技(上海)有限公司 Image erasing method and device, electronic equipment and storage medium
CN114661755A (en) * 2022-03-29 2022-06-24 北京百度网讯科技有限公司 Display mode, device and electronic equipment

Also Published As

Publication number Publication date
CN111773709B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN111773709B (en) Scene map generation method and device, computer storage medium and electronic equipment
US20220249949A1 (en) Method and apparatus for displaying virtual scene, device, and storage medium
CN111530073B (en) Game map display control method, storage medium and electronic device
CN109544663B (en) Virtual scene recognition and interaction key position matching method and device of application program
US11798223B2 (en) Potentially visible set determining method and apparatus, device, and storage medium
CN108776544B (en) Interaction method and device in augmented reality, storage medium and electronic equipment
CN113289327A (en) Display control method and device of mobile terminal, storage medium and electronic equipment
CN113559501B (en) Virtual unit selection method and device in game, storage medium and electronic equipment
CN112206519B (en) Method, device, storage medium and computer equipment for realizing game scene environment change
CN112494941A (en) Display control method and device of virtual object, storage medium and electronic equipment
CN112807695A (en) Game scene generation method and device, readable storage medium and electronic equipment
CN110555916B (en) Terrain editing method and device for virtual scene, storage medium and electronic equipment
CN116212374A (en) Model processing method, device, computer equipment and storage medium
CN115861577A (en) Method, device and equipment for editing posture of virtual field scene and storage medium
CN112473138B (en) Game display control method and device, readable storage medium and electronic equipment
CN113975802A (en) Game control method, device, storage medium and electronic equipment
CN108499102B (en) Information interface display method and device, storage medium and electronic equipment
CN113769403A (en) Virtual object moving method and device, readable storage medium and electronic equipment
CN111784810B (en) Virtual map display method and device, storage medium and electronic equipment
CN114917582A (en) Virtual scene display method and device, readable storage medium and electronic equipment
CN116271840A (en) Thermal energy effect rendering method and device, storage medium and electronic equipment
CN116173502A (en) Mapping control method and device, storage medium and electronic equipment
CN117959704A (en) Virtual model placement method and device, electronic equipment and readable storage medium
CN117442965A (en) Icon generation method and device for model, electronic equipment and storage medium
CN117899467A (en) Game interaction method, game interaction device, computer readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant