CN109939440B - Three-dimensional game map generation method and device, processor and terminal - Google Patents


Info

Publication number: CN109939440B
Application number: CN201910309444.6A
Authority: CN (China)
Prior art keywords: information, dimensional, map, target, map model
Other languages: Chinese (zh)
Other versions: CN109939440A
Inventor: 蔡泽野
Current assignee: Netease Hangzhou Network Co Ltd
Original assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN201910309444.6A; publication of CN109939440A; application granted, publication of CN109939440B
Legal status: Active


Abstract

The invention discloses a method, an apparatus, a processor, and a terminal for generating a three-dimensional game map. The method includes the following steps: creating a first game scene, where the first game scene is a blank game scene; adding a preconfigured three-dimensional map model, together with map information corresponding to that model, to the first game scene; creating a rendering target, and rendering the scene content contained in the first game scene to a target map bound to the rendering target; and rendering the target map onto a target object in the current game interface. The invention solves the technical problems that the two-dimensional game map implementations provided in the related art lack intuitiveness and impose limitations on map control operations.

Description

Three-dimensional game map generation method and device, processor and terminal
Technical Field
The present invention relates to the field of computers, and in particular, to a method, an apparatus, a processor, and a terminal for generating a three-dimensional game map.
Background
Currently, most game scenes provide a map, which is typically a thumbnail of the game scene. Important places, route information, and the like in the game scene are marked on the map. Through the map, a game player can quickly learn the current position of the character he or she controls, the route to a destination, and other information.
In the related art, a map within a game scene is typically a two-dimensional (2D) implementation, i.e., a single 2D picture. In terms of game presentation, important information is commonly overlaid on the 2D picture, such as important place names, routes, and the current character position. In terms of game operations, zooming and moving the map are generally supported, along with a few player operations such as placing markers and drawing lines.
However, while such a 2D map implementation provides basic map functionality, it has a significant technical drawback: insufficient expressiveness. For example, although place names or place information can be annotated on the map, such textual descriptions lack intuitiveness for the player; meanwhile, in terms of operation, control of a 2D map is generally limited to zooming and moving.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
At least some embodiments of the present invention provide a method, an apparatus, a processor, and a terminal for generating a three-dimensional game map, so as to at least solve the technical problems that the two-dimensional game map implementations provided in the related art lack intuitiveness and impose limitations on map control operations.
According to one embodiment of the present invention, there is provided a method for generating a three-dimensional game map, including:
creating a first game scene, wherein the first game scene is a blank game scene; adding a preconfigured three-dimensional map model and map information corresponding to the three-dimensional map model to the first game scene; creating a rendering target, and rendering the scene content contained in the first game scene to a target map bound to the rendering target; and rendering the target map onto a target object in the current game interface.
Optionally, adding the three-dimensional map model to the first game scene includes: acquiring a first information set, wherein the first information set includes: position information, rotation information, and scaling information of the three-dimensional map model in the first game scene; and adding the three-dimensional map model to the first game scene in accordance with the first information set.
Optionally, adding the map information to the first game scene includes: calculating a third information set using the first information set and a second information set, wherein the second information set includes: position information, rotation information, and scaling information of the map information relative to the three-dimensional map model, and the third information set includes: position information, rotation information, and scaling information of the map information in the first game scene; and adding the map information to the first game scene in accordance with the third information set.
Optionally, after rendering the scene content contained in the first game scene to the target map, the method further includes: performing image processing on the target map, wherein the image processing includes at least one of: color superposition processing and highlighting processing.
Optionally, after rendering the target map to the target object, the method further includes: receiving an operation instruction acting on the three-dimensional map model; and performing a control operation corresponding to the operation instruction on the three-dimensional map model.
Optionally, when the operation instruction is a click instruction, performing the control operation corresponding to the operation instruction on the three-dimensional map model includes: acquiring a first two-dimensional coordinate of the screen click position in the screen coordinate system; converting the first two-dimensional coordinate into a first three-dimensional coordinate in the world coordinate system; converting the first three-dimensional coordinate into a second two-dimensional coordinate in the UV coordinate system of the target object; converting the second two-dimensional coordinate into a second three-dimensional coordinate in the three-dimensional space coordinate system of the first game scene; and adding a marking model at the position of the second three-dimensional coordinate.
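The click-handling chain above converts a screen click into a point on the 3D map. The sketch below illustrates only the last two hops of that chain (into the target object's UV space, then into the first game scene's 3D space), under the simplifying assumption of an axis-aligned screen quad and a rectangular map footprint; all function and parameter names are hypothetical, not from the patent.

```python
def screen_to_uv(click_xy, quad_origin, quad_size):
    # Screen 2D -> UV on the target object, assuming the target object is
    # an axis-aligned rectangle on screen (a simplification of the patent's
    # screen -> world -> UV chain).
    u = (click_xy[0] - quad_origin[0]) / quad_size[0]
    v = (click_xy[1] - quad_origin[1]) / quad_size[1]
    return (u, v)

def uv_to_scene(uv, map_min, map_max, height=0.0):
    # UV -> 3D coordinates in the first game scene, by linearly mapping the
    # unit UV square onto the map model's rectangular footprint (x/z plane).
    x = map_min[0] + uv[0] * (map_max[0] - map_min[0])
    z = map_min[1] + uv[1] * (map_max[1] - map_min[1])
    return (x, height, z)   # the marking model would be added at this point

# A click in the centre of a full-screen quad lands at the map centre.
uv = screen_to_uv((960.0, 540.0), (0.0, 0.0), (1920.0, 1080.0))
mark_pos = uv_to_scene(uv, (-100.0, -100.0), (100.0, 100.0))
```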
Optionally, when the operation instruction is a movement instruction, performing the control operation corresponding to the operation instruction on the three-dimensional map model includes: acquiring first position information, a first vector corresponding to the movement instruction, and a first scale, wherein the first position information is the initial position of the three-dimensional map model before the movement instruction is received; determining second position information using the first position information, the first vector, and the first scale, wherein the second position information is the target position of the three-dimensional map model after the movement instruction is received; and performing a movement operation on the three-dimensional map model according to the second position information.
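The movement step above combines the initial position, the drag vector, and a scale factor into the target position. A minimal sketch, assuming the combination is a per-axis `initial + vector * scale` (the patent does not spell out the formula, so this is an illustrative reading):

```python
def move_map_model(initial_pos, drag_vector, scale):
    # Second position = first position + first vector * first scale,
    # applied per axis (an assumed interpretation of the patent's step).
    return tuple(p + d * scale for p, d in zip(initial_pos, drag_vector))

# Drag the map model by (3, 0, -1) at half sensitivity.
new_pos = move_map_model((1.0, 0.0, 2.0), (3.0, 0.0, -1.0), 0.5)
```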
Optionally, when the operation instruction is a rotation instruction, performing the control operation corresponding to the operation instruction on the three-dimensional map model includes: when the second vector corresponding to the rotation instruction is located in a first interval, controlling the three-dimensional map model to rotate around a first coordinate axis in a first direction; when the second vector is located in a second interval, controlling the three-dimensional map model to rotate around the first coordinate axis in a second direction, wherein the second direction is opposite to the first direction; when the second vector is located in a third interval, controlling the three-dimensional map model to rotate around a second coordinate axis in the first direction; and when the second vector is located in a fourth interval, controlling the three-dimensional map model to rotate around the second coordinate axis in the second direction.
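The four-interval dispatch above can be sketched by bucketing the drag vector's direction angle. The patent does not define the interval boundaries, so the 45°/135°/225°/315° cut-offs, the axis names, and the function name below are all assumptions for illustration:

```python
import math

def rotation_for_drag(drag_vector):
    # Map a 2D drag vector ("second vector") to a rotation axis and direction
    # by which angular interval its direction falls in. Boundaries assumed.
    dx, dy = drag_vector
    angle = math.degrees(math.atan2(dy, dx)) % 360
    if 45 <= angle < 135:            # first interval: first axis, first direction
        return ("x", 1)
    if 225 <= angle < 315:           # second interval: first axis, opposite direction
        return ("x", -1)
    if angle < 45 or angle >= 315:   # third interval: second axis, first direction
        return ("y", 1)
    return ("y", -1)                 # fourth interval: second axis, opposite direction
```

A mostly-vertical drag thus tilts the map around the x axis, while a mostly-horizontal drag spins it around the y axis, which matches the usual feel of rotating a 3D map with one finger.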
Optionally, when the operation instruction is a zoom instruction, performing a control operation corresponding to the operation instruction on the three-dimensional map model includes: acquiring an initial scaling value, a scaling distance and a second scale; determining a target scaling value by adopting the initial scaling value, the scaling distance and the second scale; and performing scaling operation on the three-dimensional map model according to the target scaling value.
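The zoom step combines an initial scaling value, a scaling distance (e.g. a pinch distance), and a second scale into the target scaling value. A minimal sketch, assuming a linear `initial + distance * scale` rule with clamping; the formula, clamp limits, and names are illustrative assumptions, not taken from the patent:

```python
def zoom_map_model(initial_scale, pinch_distance, scale_factor,
                   min_scale=0.5, max_scale=3.0):
    # Target scaling value = initial scaling value + scaling distance * second
    # scale, clamped to an assumed legal range so the map cannot vanish or
    # grow without bound.
    target = initial_scale + pinch_distance * scale_factor
    return max(min_scale, min(max_scale, target))
```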
Optionally, after performing a control operation corresponding to the operation instruction on the three-dimensional map model, the method further includes: acquiring an updated first information set obtained by executing control operation on the three-dimensional map model; determining an updated third information set based on the updated first information set; determining updated scene content contained in the first game scene according to the updated first information set and the updated third information set; rendering the updated scene content to a target map; rendering the target map to the target object.
According to one embodiment of the present invention, there is also provided a three-dimensional game map generating apparatus, including:
the first creation module is used for creating a first game scene, wherein the first game scene is a blank game scene; an adding module, configured to add a preconfigured three-dimensional map model and map information corresponding to the three-dimensional map model to a first game scene; the second creation module is used for creating a rendering target and rendering scene contents contained in the first game scene to a target map bound with the rendering target; and the rendering module is used for rendering the target map to the target object in the current game interface.
Optionally, the adding module includes: a first obtaining unit, configured to obtain a first information set, where the first information set includes: position information, rotation information and scaling information of the three-dimensional map model in the first game scene; a first adding unit for adding the three-dimensional map model to the first game scene according to the first information set.
Optionally, the adding module includes: a calculating unit configured to calculate a third information set using the first information set and a second information set, wherein the second information set includes: position information, rotation information, and scaling information of the map information relative to the three-dimensional map model, and the third information set includes: position information, rotation information, and scaling information of the map information in the first game scene; and a second adding unit for adding the map information to the first game scene according to the third information set.
Optionally, the apparatus further includes: the processing module is used for carrying out image processing on the target map, wherein the image processing comprises at least one of the following steps: color superimposing processing, highlighting processing.
Optionally, the apparatus further includes: the receiving module is used for receiving an operation instruction acting on the three-dimensional map model; and the execution module is used for executing control operation corresponding to the operation instruction on the three-dimensional map model.
Optionally, when the operation instruction is a click instruction, the execution module includes: a second acquisition unit for acquiring a first two-dimensional coordinate of the screen click position in the screen coordinate system; a first conversion unit for converting the first two-dimensional coordinate into a first three-dimensional coordinate in the world coordinate system; a second conversion unit for converting the first three-dimensional coordinate into a second two-dimensional coordinate in the UV coordinate system of the target object; a third conversion unit for converting the second two-dimensional coordinate into a second three-dimensional coordinate in the three-dimensional space coordinate system of the first game scene; and a first execution unit for adding a marking model at the position of the second three-dimensional coordinate.
Optionally, when the operation instruction is a move instruction, the execution module includes: the third acquisition unit is used for acquiring first position information, a first vector corresponding to the movement instruction and a first scale, wherein the first position information is an initial position of the three-dimensional map model before receiving the movement instruction; the first determining unit is used for determining second position information by adopting the first position information, the first vector and the first scale, wherein the second position information is a target position of the three-dimensional map model after receiving the moving instruction; and the second execution unit is used for executing moving operation on the three-dimensional map model according to the second position information.
Optionally, when the operation instruction is a rotation instruction, the execution module is configured to: when the second vector corresponding to the rotation instruction is located in the first interval, the three-dimensional map model is controlled to rotate around the first coordinate axis according to the first direction; when the second vector is located in the second interval, the three-dimensional map model is controlled to rotate around the first coordinate axis according to a second direction, wherein the second direction is opposite to the first direction; when the second vector is located in the third interval, the three-dimensional map model is controlled to rotate around the second coordinate axis according to the first direction; and when the second vector is positioned in the fourth interval, controlling the three-dimensional map model to rotate around the second coordinate axis according to the second direction.
Optionally, when the operation instruction is a zoom instruction, the execution module includes: a fourth obtaining unit, configured to obtain an initial scaling value, a scaling distance, and a second scale; a second determining unit, configured to determine a target scaling value using the initial scaling value, the scaling distance, and the second scale; and the third execution unit is used for executing scaling operation on the three-dimensional map model according to the target scaling value.
Optionally, the apparatus further includes: the acquisition module is used for acquiring an updated first information set obtained by executing control operation on the three-dimensional map model; the determining module is used for determining an updated third information set based on the updated first information set and determining updated scene contents contained in the first game scene according to the updated first information set and the updated third information set; and the rendering module is also used for rendering the updated scene content to the target map and rendering the target map to the target object.
According to an embodiment of the present invention, there is further provided a storage medium including a stored program, wherein, when the program runs, a device in which the storage medium is located is controlled to execute the method for generating a three-dimensional game map according to any one of the above.
According to an embodiment of the present invention, there is further provided a processor, where the processor is configured to run a program, where the program executes the method for generating a three-dimensional game map according to any one of the above.
According to one embodiment of the present invention, there is also provided a terminal including: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors to perform the method for generating a three-dimensional game map according to any one of the above.
In at least some embodiments of the present invention, a blank game scene is created, and a preconfigured three-dimensional map model, together with the map information corresponding to that model, is added to it. By creating a rendering target, rendering the scene content contained in the first game scene to a target map bound to the rendering target, and rendering the target map onto a target object in the current game interface, a 3D map effect is realized in the game scene while remaining compatible with the various operation modes of a 2D map. This achieves the technical effect of a more intuitive and fluid map experience in terms of both game presentation and game operation, thereby solving the technical problems that the two-dimensional game map implementations provided in the related art lack intuitiveness and impose limitations on map control operations.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
fig. 1 shows a hardware block diagram of a computer terminal (or mobile device) for implementing a method of generating a three-dimensional game map;
FIG. 2 is a flow chart of a method of generating a three-dimensional game map according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of converting gamer click coordinates to map UV coordinates in accordance with an alternative embodiment of the present invention;
FIG. 4 is a schematic diagram of converting the UV coordinates of a map to 3D coordinates of a scene in which a 3D map model is located, according to an alternative embodiment of the present invention;
FIG. 5 is a schematic diagram of a rotation operation for a 3D map model in accordance with an alternative embodiment of the invention;
fig. 6 is a block diagram of a structure of a three-dimensional game map generating apparatus according to an embodiment of the present invention;
fig. 7 is a block diagram of a three-dimensional game map generation apparatus according to an alternative embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to one embodiment of the present invention, there is provided an embodiment of a method of generating a three-dimensional game map, it being noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that herein.
The method embodiments may be performed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware block diagram of a computer terminal (or mobile device) for implementing the method of generating a three-dimensional game map. As shown in fig. 1, the computer terminal 10 (or mobile device) may include one or more processors 102 (shown as 102a, 102b, …, 102n), where the processor 102 may include, but is not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microcontroller unit (MCU), a field-programmable gate array (FPGA), and the like; a memory 104 for storing data; and a transmission means for communication functions. The computer terminal may further include: a display, an input/output interface (I/O interface), a universal serial bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 1 is merely illustrative and does not limit the configuration of the computer terminal 10 (or mobile device). For example, the computer terminal 10 (or mobile device) may include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuits described above may be referred to generally herein as a "data processing circuit". The data processing circuit may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuit may be a single stand-alone processing module, or be incorporated, in whole or in part, into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the present application, the data processing circuit acts as a kind of processor control (e.g., selection of the path of a variable resistor termination to an interface).
The memory 104 may be used to store software programs and modules of application software, such as a program instruction/data storage device corresponding to the method for generating a three-dimensional game map in the embodiment of the present invention, and the processor 102 executes the software programs and modules stored in the memory 104, thereby executing various functional applications and data processing, that is, implementing the method for generating a three-dimensional game map described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means is used for receiving or transmitting data via a network. The specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission means comprises a network adapter (Network Interface Controller, simply referred to as NIC) that can be connected to other network devices via a base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
The display may be, for example, a touch-screen liquid crystal display (LCD) that enables a user to interact with the user interface of the computer terminal 10 (or mobile device). In some embodiments, the computer terminal 10 (or mobile device) shown in fig. 1 has a touch display (also referred to as a "touch screen"). In some embodiments, the computer terminal 10 (or mobile device) has a graphical user interface (GUI) with which the user may interact through finger contacts and/or gestures on the touch-sensitive surface. The human-machine interaction functionality optionally includes interactions such as creating web pages, drawing, word processing, making electronic documents, playing games, video conferencing, instant messaging, sending and receiving e-mail, call interfaces, playing digital video, playing digital music, and/or web browsing; the executable instructions for performing these human-machine interaction functions are configured/stored in a computer program product or readable storage medium executable by the one or more processors.
The computer terminal 10 (or mobile device) may be a smart phone (such as an Android phone, iOS phone, etc.), a tablet computer, a palm computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, etc.
In this embodiment, there is provided a method for generating a three-dimensional game map running on the computer terminal (or mobile device), and fig. 2 is a flowchart of a method for generating a three-dimensional game map according to one embodiment of the present invention, as shown in fig. 2, the method includes the steps of:
step S202, creating a first game scene, wherein the first game scene is a blank game scene;
step S204, adding a preconfigured three-dimensional map model and map information corresponding to the three-dimensional map model to the first game scene;
step S206, creating a rendering target, and rendering scene contents contained in the first game scene to a target map bound by the rendering target;
step S208, rendering the target map to the target object in the current game interface.
Through the above steps, a blank game scene is created; a preconfigured three-dimensional map model and the map information corresponding to it are added to the blank game scene; a rendering target is created; the scene content contained in the first game scene is rendered to a target map bound to the rendering target; and the target map is rendered onto a target object in the current game interface. This realizes a 3D map effect in the game scene while remaining compatible with the various operation modes of a 2D map, achieves a more intuitive and fluid map experience in terms of both game presentation and game operation, and solves the technical problems that the two-dimensional game map implementations provided in the related art lack intuitiveness and impose limitations on map control operations.
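Steps S202 through S208 can be sketched end to end. The stand-in `Scene` and `RenderTarget` classes below are hypothetical (the patent names no engine API); they merely make the data flow concrete: scene contents are rendered into the map bound to a rendering target, and that map is then handed to a UI object.

```python
class RenderTarget:
    """A rendering target bound to a texture (the 'target map')."""
    def __init__(self):
        self.target_map = []          # the bound map, modelled as a list of drawn items

class Scene:
    """A stand-in for a game scene holding models, 3D UI, and effects."""
    def __init__(self):
        self.contents = []

    def add(self, item):
        self.contents.append(item)

    def render_to(self, render_target):
        # Render everything in this scene into the target's bound map.
        render_target.target_map = list(self.contents)

def generate_3d_game_map(map_model, map_info, ui_object):
    scene = Scene()                          # S202: create a blank first game scene
    scene.add(map_model)                     # S204: add the preconfigured 3D map model...
    for info in map_info:                    # ...and its corresponding map information
        scene.add(info)
    rt = RenderTarget()                      # S206: create a rendering target and render
    scene.render_to(rt)                      #       the scene content to its target map
    ui_object["texture"] = rt.target_map     # S208: render target map onto the UI object
    return ui_object

ui = generate_3d_game_map("terrain_model", ["place_name_ui", "marker_fx"], {})
```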
Optionally, in step S204, adding the three-dimensional map model to the first game scene may include performing the steps of:
step S2041, obtaining a first information set, wherein the first information set includes: position information, rotation information and scaling information of the three-dimensional map model in the first game scene;
step S2042, adding a three-dimensional map model to the first game scene in accordance with the first information set.
In order to realize the effect of a 3D map in the game, the 3D map model needs to be rendered, and the corresponding map information (including place-name information, and other models or special effects that need to be marked on the map) needs to be rendered at specific positions on the 3D map model. To this end, a blank game scene is created in the game, and a camera is set in the blank game scene to facilitate the subsequent rendering of the 3D map model. A game project can typically create multiple game scenes at the same time, but usually only the content of one game scene can be displayed at any given moment.
The purpose of creating the blank game scene is to add the 3D map model, 3D UI, models, and special effects to it in isolation. The blank game scene and the normal game scene are mutually independent. After the blank game scene is created, a 3D map model of a certain scale is created; the 3D map model is added to the blank game scene based on its position, rotation, and scaling information in that scene, and the scene containing the 3D map model is then rendered into the current game interface.
It should be noted that, the following two ways may be generally adopted to add the 3D map model to the current game interface:
in the first mode, the 3D map model is directly added to the current game interface.
And secondly, creating a blank game scene, adding the 3D map model into the blank game scene, and then rendering the scene added with the 3D map model into the current game interface.
Considering that additional effects need to be added later to the scene containing the 3D map model, including but not limited to special effects used only while the 3D map model is displayed (such as routes, preview lines, and range circles), models (such as a point-selection marker), and special post-processing of the scene in which the 3D map model is located (for example, rendering that scene into a separate map and then applying a post-processing effect that darkens the four corners of the map and highlights its center), in an alternative embodiment a blank game scene is additionally created and the 3D map model is added to that blank game scene.
Alternatively, in step S204, adding map information to the first game scene may include performing the steps of:
Step S2043, calculating a third information set using the first information set and a second information set, wherein the second information set includes: position information, rotation information, and scaling information of the map information relative to the three-dimensional map model, and the third information set includes: position information, rotation information, and scaling information of the map information in the first game scene;
step S2044, adding map information to the first game scene in accordance with the third information set.
In an alternative embodiment, the map information created for the 3D map model includes place names (i.e., 3DUI; this mainly refers to place names used in the 3D map model, and differs from 2DUI in that a 3DUI element resembles a 3D model: it can be added to a 3D scene and has a 3D position, 3D rotation, and so on), labels (special effects or models), and the like. The position and rotation of these pieces of map information relative to the 3D map model are calculated, and the map information is added to the blank game scene that has already been created. Here, matrices may be used to express the relationship between the map information and the 3D map model. Suppose the matrix M_map represents the position, rotation and scaling of the 3D map model in the blank game scene (corresponding to the first information set), and M_other represents the position, rotation and scaling of a piece of map information relative to the 3D map model (corresponding to the second information set); then M_other * M_map represents the position, rotation and scaling of that map information in the blank game scene (corresponding to the third information set).
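The M_other * M_map composition described above can be sketched as follows. This is a minimal illustration assuming row-vector 4x4 transforms (so the local transform applies first); for brevity only uniform scale and translation are built, since rotation would fill the upper 3x3 block in the same way. All names are illustrative, not engine API.

```python
def trs_matrix(tx, ty, tz, scale=1.0):
    """Row-vector 4x4 transform: uniform scale plus translation (rotation,
    which would occupy the upper 3x3 the same way, is omitted for brevity)."""
    return [
        [scale, 0.0, 0.0, 0.0],
        [0.0, scale, 0.0, 0.0],
        [0.0, 0.0, scale, 0.0],
        [tx, ty, tz, 1.0],
    ]

def mat_mul(a, b):
    """4x4 product a * b; with row vectors, p * (a * b) applies a first."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# M_map: the 3D map model placed in the blank scene (first information set).
m_map = trs_matrix(10.0, 0.0, 5.0, scale=2.0)
# M_other: a place-name label relative to the map model (second information set).
m_other = trs_matrix(1.0, 0.5, 0.0)
# M_other * M_map: the label's transform in the blank scene (third information
# set); with row vectors the bottom row holds the resulting world position.
m_world = mat_mul(m_other, m_map)
```

Because the label's local offset (1.0, 0.5, 0.0) is scaled by 2 and then shifted by the model's translation, the composed bottom row comes out at (12, 1, 5).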
Optionally, after rendering the scene content included in the first game scene to the target map in step S206, the following steps may be further performed:
step S207, performing image processing on the target map, wherein the image processing comprises at least one of the following steps: color superimposing processing, highlighting processing.
Any game scene created in the game needs to specify a rendering target (RenderTarget). A rendering target is bound to a map, and the content contained in the game scene is rendered into that bound map. A default rendering target (Default RenderTarget) is typically set in the game so that the main scene is rendered into it, and the content displayed by the game interface is usually the specific map bound to the default rendering target. Therefore, a rendering target needs to be created for the blank game scene, and all content added to the blank game scene (including the map model, 3DUI, special effects, models and the like) is rendered into the map bound to that rendering target. Since this alternative embodiment adds the 3D map model and some corresponding map information to a blank game scene, the scene containing the 3D map model needs to be rendered into the target map bound to a designated RenderTarget.
In an alternative embodiment, some image-effect processing may be performed on the rendered map, including but not limited to color superimposing, highlighting, and the like. The processed 2D map, which carries the 3D map model and the special map information, is then rendered onto the current game interface. For example, the final 3D map effect may be achieved by rendering the 2D map onto a target object, such as the glass of the cockpit. As another example, the 2D map may be rendered directly onto the current game interface as a UI sprite node.
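The "darken the four corners, highlight the center" post-processing mentioned above can be sketched as a simple vignette pass. Treating the rendered map as a 2D list of 0..1 brightness values is an assumption for illustration; a real engine would do this in a shader.

```python
def vignette(pixels, strength=0.6):
    """Darken pixels toward the corners; 'pixels' is a 2D list of 0..1 grays."""
    h, w = len(pixels), len(pixels[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_d = (cx ** 2 + cy ** 2) ** 0.5  # distance from centre to a corner
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # normalised distance: 0 at the centre, 1 at the corners,
            # so the corners end up darkest and the centre stays bright
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 / max_d
            row.append(pixels[y][x] * (1.0 - strength * d))
        out.append(row)
    return out

shaded = vignette([[1.0] * 5 for _ in range(5)])
```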
Optionally, after rendering the target map to the target object in step S208, the following steps may be further included:
step S209, receiving an operation instruction acting on the three-dimensional map model;
step S210, a control operation corresponding to the operation instruction is performed on the three-dimensional map model.
Corresponding control operations are defined according to the type of operation platform. Here, the types of operation platform mainly include personal computers (the PC platform) and mobile devices (the mobile platform). In general, control operations on the display screen are performed with a mouse on the PC platform and with multi-finger touch on the mobile platform. Corresponding operation schemes therefore need to be defined separately for each type of platform.
On the PC platform, different 3D map control operations are defined mainly in terms of mouse operations, which may include, but are not limited to: a mouse click is a marker point-selection operation; holding the left mouse button and sliding is a movement operation; holding the right mouse button and sliding is a rotation operation; and operating the mouse wheel is a zoom operation. On the mobile platform, different 3D map control operations are defined mainly in terms of multi-finger touch operations, which may include, but are not limited to: a single-finger tap is a marker point-selection operation; a single-finger press and slide is a movement operation; a two-finger press and slide is a rotation operation; and changing the distance between two fingers is a zoom operation. Corresponding operations are thus executed on the 3D map model according to the different operation instructions.
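The per-platform mapping above can be sketched as a small dispatch table. The platform and gesture names are illustrative assumptions, not an engine API; a real implementation would key on the engine's input events.

```python
# Hypothetical (platform, gesture) -> map-operation table mirroring the
# scheme described in the text.
GESTURE_TO_OPERATION = {
    ("pc", "left_click"): "mark_point",
    ("pc", "left_drag"): "move",
    ("pc", "right_drag"): "rotate",
    ("pc", "wheel"): "zoom",
    ("mobile", "single_tap"): "mark_point",
    ("mobile", "single_drag"): "move",
    ("mobile", "double_drag"): "rotate",
    ("mobile", "pinch"): "zoom",
}

def operation_for(platform, gesture):
    """Return the 3D map operation for a gesture, or None if unmapped."""
    return GESTURE_TO_OPERATION.get((platform, gesture))
```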
Alternatively, when the operation instruction is a click instruction, performing a control operation corresponding to the operation instruction on the three-dimensional map model in step S210 may include performing the steps of:
step S2100, obtaining a first two-dimensional coordinate of a screen click position under a screen coordinate system;
step S2101, converting the first two-dimensional coordinates into first three-dimensional coordinates in world coordinates;
step S2102, converting the first three-dimensional coordinate into a second two-dimensional coordinate under a UV coordinate system of a target object;
Step S2103, converting the second two-dimensional coordinates into second three-dimensional coordinates in the three-dimensional space coordinate system of the first game scene;
in step S2104, a marker model is added to the location of the second three-dimensional coordinate.
When the operation instruction is a click instruction, the 3D map model and the scene containing it have already been rendered onto a 2D map, and that 2D map has been rendered onto the glass of the cockpit. The click-marking operation is therefore typically completed as follows. In a 3D game, when a game player clicks the screen, the player's actual intent is to click a specific 3D location in the game scene, and cameras in current 3D game engines are typically provided with an interface (i.e., ScreenPosToWorldPos) that converts the 2D planar coordinates of the screen into the 3D world coordinates of the game scene. Thus, when a game player clicks the current screen (a mobile phone screen or a computer screen), the player is in effect clicking a specific location on the cockpit glass. At this point, the ScreenPosToWorldPos interface of the camera in the current scene needs to be invoked to perform the first conversion.
Fig. 3 is a schematic diagram of converting a game player's click coordinates into mapped UV coordinates according to an alternative embodiment of the present invention. As shown in Fig. 3, assume the screen coordinates clicked by the game player are (x, y), corresponding to the first two-dimensional coordinates, and the coordinates obtained from the camera's first conversion are (x1, y1, z1), which are the coordinates of the corresponding point on the cockpit glass and correspond to the first three-dimensional coordinates. Using the engine interface, (x1, y1, z1) can then be converted into the UV coordinates (u, v) used on the glass plane, which correspond to the second two-dimensional coordinates.
In addition, the map on the glass is exactly the 2D map obtained by rendering the scene containing the 3D map model. Fig. 4 is a schematic diagram of converting the map's UV coordinates into 3D coordinates of the scene containing the 3D map model according to an alternative embodiment of the present invention. As shown in Fig. 4, the camera set in the blank game scene can again invoke ScreenPosToWorldPos to convert the coordinates (u, v) into the 3D space coordinates (fx, fy, fz) used in the blank game scene, which correspond to the second three-dimensional coordinates; (fx, fy, fz) is the specific location the game player finally intended to click. A marker model (an arrow model indicating that the user clicked a specific position on the 3D map) then only needs to be added at that position in the blank game scene to complete the point-selection operation on the 3D map.
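The conversion chain above can be condensed into a small sketch. The real first two steps go through camera interfaces such as ScreenPosToWorldPos; they are collapsed here under the simplifying assumptions that the glass fills the screen and the map plane is axis-aligned at height y = 0. All names and the bounds format are hypothetical.

```python
def click_to_map_point(screen_xy, screen_size, bounds):
    """Stand-in for the four-step conversion: screen -> (stub) glass UV ->
    blank-scene 3D point where the marker model would be placed.
    bounds = (min_x, min_z, max_x, max_z) of the map plane; y is assumed 0."""
    sx, sy = screen_xy
    w, h = screen_size
    u, v = sx / w, sy / h               # steps 1-2 collapsed: screen -> UV
    min_x, min_z, max_x, max_z = bounds
    fx = min_x + u * (max_x - min_x)    # step 3: UV -> scene x
    fz = min_z + v * (max_z - min_z)    # step 3: UV -> scene z
    return (fx, 0.0, fz)                # (fx, fy, fz) for the marker model
```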
Alternatively, when the operation instruction is a movement instruction, performing a control operation corresponding to the operation instruction on the three-dimensional map model in step S210 may include performing the steps of:
step S2105, acquiring first position information, a first vector corresponding to a movement instruction and a first scale, wherein the first position information is an initial position of the three-dimensional map model before receiving the movement instruction;
Step S2106, determining second position information by adopting the first position information, the first vector and the first scale, wherein the second position information is a target position of the three-dimensional map model after receiving the moving instruction;
step S2107, performing a moving operation on the three-dimensional map model according to the second position information.
When the operation instruction is a movement instruction, the movement operation is easier to implement than the point-selection operation. On the PC platform, the game player holds the left mouse button and slides; on the mobile platform, the game player presses and slides on the screen with a single finger. The target position can be determined by recording the direction vector and distance of the slide. Assuming the sliding direction vector is (ax, ay), corresponding to the first vector, the sliding scale is k (corresponding to the first scale), and the initial position of the 3D map model (corresponding to the first position information) is (x0, y0, z0), the target position after the movement (corresponding to the second position information) is: (x1, y1, z1) = (x0, y0, z0) + k * (ax, ay, 0).
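The movement formula can be written directly as a small sketch; names are illustrative.

```python
def move_map(initial, slide_vec, k):
    """Target position = initial + k * (ax, ay, 0), as in the formula above;
    the slide has no z component, so the model's z stays fixed."""
    x0, y0, z0 = initial
    ax, ay = slide_vec
    return (x0 + k * ax, y0 + k * ay, z0)
```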
Alternatively, when the operation instruction is a rotation instruction, performing a control operation corresponding to the operation instruction on the three-dimensional map model in step S210 may include performing the steps of:
Step S2108, when the second vector corresponding to the rotation instruction is located in the first section, controlling the three-dimensional map model to rotate around the first coordinate axis according to the first direction; when the second vector is located in the second interval, the three-dimensional map model is controlled to rotate around the first coordinate axis according to a second direction, wherein the second direction is opposite to the first direction; when the second vector is located in the third interval, the three-dimensional map model is controlled to rotate around the second coordinate axis according to the first direction; and when the second vector is positioned in the fourth interval, controlling the three-dimensional map model to rotate around the second coordinate axis according to the second direction.
When the operation instruction is a rotation instruction, the rotation operation is similar to the movement operation: on the PC platform, the game player holds the right mouse button and slides; on the mobile platform, the game player presses and slides with two fingers on the screen. Fig. 5 is a schematic diagram of a rotation operation on a 3D map model according to an alternative embodiment of the present invention. As shown in Fig. 5, since the rotation operation can be performed around multiple axes simultaneously, taking the two-dimensional rectangular coordinate system of the screen as an example: when the sliding vector is in the α1 interval (corresponding to the first interval), for example (45 degrees, 135 degrees), the 3D map model rotates forward (corresponding to the first direction) around the x-axis; when the sliding vector is in the α2 interval (corresponding to the second interval), for example (-135 degrees, -45 degrees), the 3D map model rotates backward (corresponding to the second direction) around the x-axis; when the sliding vector is in the β1 interval (corresponding to the third interval), for example (-45 degrees, 45 degrees), the 3D map model rotates forward around the y-axis; and when the sliding vector is in the β2 interval (corresponding to the fourth interval), for example (135 degrees, -135 degrees), the 3D map model rotates backward around the y-axis.
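The interval test can be sketched with a quadrant-aware angle. The open/closed choices at the interval boundaries are assumptions (the text only gives the example ranges), and β2 is the wrap-around interval covering (135, 180] and [-180, -135).

```python
import math

def rotation_for_slide(ax, ay):
    """Pick rotation axis and direction from the slide vector's angle,
    using the four example intervals given in degrees (-180..180)."""
    angle = math.degrees(math.atan2(ay, ax))
    if 45 < angle <= 135:        # alpha-1: forward rotation around x
        return ("x", 1)
    if -135 <= angle < -45:      # alpha-2: backward rotation around x
        return ("x", -1)
    if -45 <= angle <= 45:       # beta-1: forward rotation around y
        return ("y", 1)
    return ("y", -1)             # beta-2: wrap-around interval (135, -135)
```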
Alternatively, when the operation instruction is a zoom instruction, performing a control operation corresponding to the operation instruction on the three-dimensional map model in step S210 may include performing the steps of:
step S2109, obtaining an initial scaling value, a scaling distance and a second scale;
step S2110, determining a target scaling value by adopting the initial scaling value, the scaling distance and the second scale;
step S2111, a scaling operation is performed on the three-dimensional map model in accordance with the target scaling value.
When the operation instruction is a zoom instruction, the zoom operation is performed with the mouse wheel on the PC platform, and by moving two fingers closer together or farther apart on the mobile platform. Assuming the zoom distance of the wheel or the two fingers is dx, the zoom scale is k (corresponding to the second scale), and the initial zoom value is S_0, the target zoom value is S_0 + dx * k.
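The target zoom value S_0 + dx * k can be sketched as follows; the clamping range is an added assumption to keep the map at a usable scale, not part of the stated formula.

```python
def zoom_value(s0, dx, k, lo=0.5, hi=4.0):
    """Target zoom = S_0 + dx * k, clamped to [lo, hi] (clamp is an
    illustrative assumption)."""
    return max(lo, min(hi, s0 + dx * k))
```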
Optionally, after performing the control operation corresponding to the operation instruction on the three-dimensional map model in step S210, the following performing step may be further included:
step S211, obtaining an updated first information set obtained by executing control operation on the three-dimensional map model;
step S212, determining an updated third information set based on the updated first information set;
Step S213, determining updated scene content contained in the first game scene according to the updated first information set and the updated third information set;
step S214, rendering the updated scene content to a target map;
step S215, render the target map to the target object.
Because the matrix M_map of the 3D map model has changed, the spatial position, rotation, scaling and so on of the other markers that have the 3D map model as their parent node (such as place names and headings) also need to be updated synchronously. The scene content is then re-rendered into the target map bound to the RenderTarget. Finally, the content of the target map bound to the RenderTarget is refreshed onto the glass of the cockpit, thereby realizing the control operation on the 3D map model.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, or by hardware alone, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, or the part of it contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiment also provides a device for generating a three-dimensional game map, which is used for implementing the above embodiment and its preferred implementations; what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 6 is a block diagram of a three-dimensional game map generation apparatus according to one embodiment of the present invention, as shown in fig. 6, the apparatus including: a first creating module 10, configured to create a first game scene, where the first game scene is a blank game scene; an adding module 20 for adding a preconfigured three-dimensional map model and map information corresponding to the three-dimensional map model to a first game scene; a second creating module 30, configured to create a rendering target, and render scene content included in the first game scene to a target map bound to the rendering target; and a rendering module 40, configured to render the target map to a target object in the current game interface.
Optionally, the adding module 20 includes: a first acquisition unit (not shown in the figure) for acquiring a first set of information, wherein the first set of information includes: position information, rotation information and scaling information of the three-dimensional map model in the first game scene; a first adding unit (not shown in the figure) for adding the three-dimensional map model to the first game scene according to the first set of information.
Optionally, the adding module 20 includes: a calculating unit (not shown in the figure) for calculating a third set of information using the first set of information and a second set of information, wherein the second set of information comprises: position information, rotation information, and zoom information of the map information relative to the three-dimensional map model, the third set of information comprising: position information, rotation information and zoom information of map information in a first game scene; a second adding unit (not shown in the figure) for adding map information to the first game scene in accordance with the third information set.
Alternatively, fig. 7 is a block diagram of a three-dimensional game map generating apparatus according to an alternative embodiment of the present invention, and as shown in fig. 7, the apparatus further includes: a processing module 50, configured to perform image processing on the target map, where the image processing includes at least one of: color superimposing processing, highlighting processing.
Optionally, as shown in fig. 7, the apparatus further includes: a receiving module 60 for receiving an operation instruction acting on the three-dimensional map model; the execution module 70 is configured to execute a control operation corresponding to the operation instruction on the three-dimensional map model.
Optionally, when the operation instruction is a click instruction, the execution module 70 includes: a second acquisition unit (not shown in the figure) for acquiring a first two-dimensional coordinate of the screen click position in the screen coordinate system; a first conversion unit (not shown in the figure) for converting the first two-dimensional coordinates into first three-dimensional coordinates in world coordinates; a second conversion unit (not shown in the figure) for converting the first three-dimensional coordinates into second two-dimensional coordinates in the UV coordinate system of the target object; a third conversion unit (not shown in the figure) for converting the second two-dimensional coordinates into second three-dimensional coordinates in the three-dimensional space coordinate system of the first game scene; a first execution unit (not shown in the figure) for adding a marker model at the location of the second three-dimensional coordinates.
Optionally, when the operation instruction is a move instruction, the execution module 70 includes: a third obtaining unit (not shown in the figure) for obtaining first position information, a first vector corresponding to the movement instruction, and a first scale, wherein the first position information is an initial position of the three-dimensional map model before receiving the movement instruction; a first determining unit (not shown in the figure) for determining second position information using the first position information, the first vector, and the first scale, wherein the second position information is a target position of the three-dimensional map model after receiving the movement instruction; and a second execution unit (not shown in the figure) for executing a moving operation on the three-dimensional map model in accordance with the second position information.
Optionally, when the operation instruction is a rotation instruction, the execution module 70 is configured to: when the second vector corresponding to the rotation instruction is located in the first interval, the three-dimensional map model is controlled to rotate around the first coordinate axis according to the first direction; when the second vector is located in the second interval, the three-dimensional map model is controlled to rotate around the first coordinate axis according to a second direction, wherein the second direction is opposite to the first direction; when the second vector is located in the third interval, the three-dimensional map model is controlled to rotate around the second coordinate axis according to the first direction; and when the second vector is positioned in the fourth interval, controlling the three-dimensional map model to rotate around the second coordinate axis according to the second direction.
Optionally, when the operation instruction is a zoom instruction, the execution module 70 includes: a fourth acquisition unit (not shown in the figure) for acquiring the initial scaling value, the scaling distance, and the second scale; a second determining unit (not shown in the figure) for determining a target scaling value using the initial scaling value, the scaling distance and the second scale; a third execution unit (not shown in the figure) for performing a scaling operation on the three-dimensional map model according to the target scaling value.
Optionally, as shown in fig. 7, the apparatus further includes: an acquisition module 80, configured to acquire an updated first information set obtained by performing a control operation on the three-dimensional map model; a determining module 90, configured to determine an updated third information set based on the updated first information set, and determine updated scene content included in the first game scene according to the updated first information set and the updated third information set; the rendering module 40 is further configured to render the updated scene content to a target map and render the target map to a target object.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
An embodiment of the invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store a computer program for performing the steps of:
s1, creating a first game scene, wherein the first game scene is a blank game scene;
s2, adding a preconfigured three-dimensional map model and map information corresponding to the three-dimensional map model to a first game scene;
s3, creating a rendering target, and rendering scene contents contained in the first game scene to a target map bound by the rendering target;
and S4, rendering the target map to a target object in the current game interface.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing a computer program.
Embodiments of the invention also provide a processor arranged to run a computer program to perform the steps of any of the method embodiments described above.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, creating a first game scene, wherein the first game scene is a blank game scene;
s2, adding a preconfigured three-dimensional map model and map information corresponding to the three-dimensional map model to a first game scene;
s3, creating a rendering target, and rendering scene contents contained in the first game scene to a target map bound by the rendering target;
and S4, rendering the target map to a target object in the current game interface.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments and optional implementations, and this embodiment is not described herein.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology content may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program code.
The foregoing is merely a preferred embodiment of the present invention and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present invention, which are intended to be comprehended within the scope of the present invention.

Claims (21)

1. A method for generating a three-dimensional game map, comprising:
creating a first game scene, wherein the first game scene is a blank game scene;
adding a preconfigured three-dimensional map model and map information corresponding to the three-dimensional map model to the first game scene, wherein the map information comprises at least one of the following information: place name information, model information or special effect information;
creating a rendering target, and rendering scene contents contained in the first game scene to a target map bound by the rendering target;
rendering the target map to a target object in a current game interface;
wherein adding a preconfigured three-dimensional map model and map information corresponding to the three-dimensional map model to the first game scene comprises: acquiring a first information set, wherein the first information set comprises: position information, rotation information and scaling information of the three-dimensional map model in the first game scene; adding the three-dimensional map model to the first game scene according to the first information set; calculating a third information set by adopting the first information set and a second information set, wherein the second information set comprises: position information, rotation information and zoom information of the map information relative to the three-dimensional map model, and the third information set comprises: position information, rotation information and zoom information of the map information in the first game scene; and adding the map information to the first game scene according to the third information set.
2. The method of claim 1, further comprising, after rendering scene content contained in the first game scene to the target map:
performing image processing on the target map, wherein the image processing comprises at least one of the following: color superimposing processing, highlighting processing.
3. The method of claim 1, further comprising, after rendering the target map to the target object:
receiving an operation instruction acting on the three-dimensional map model;
and executing control operation corresponding to the operation instruction on the three-dimensional map model.
4. The method of claim 3, wherein when the operation instruction is a click instruction, performing a control operation corresponding to the operation instruction on the three-dimensional map model includes:
acquiring a first two-dimensional coordinate of a screen click position under a screen coordinate system;
converting the first two-dimensional coordinates into first three-dimensional coordinates under world coordinates;
converting the first three-dimensional coordinate into a second two-dimensional coordinate under a UV coordinate system of the target object;
converting the second two-dimensional coordinates into second three-dimensional coordinates in a three-dimensional space coordinate system of the first game scene;
And adding a marking model at the position where the second three-dimensional coordinate is located.
5. The method of claim 3, wherein when the operation instruction is a movement instruction, performing a control operation corresponding to the operation instruction on the three-dimensional map model includes:
acquiring first position information, a first vector corresponding to the movement instruction, and a first scale, wherein the first position information is the initial position of the three-dimensional map model before the movement instruction is received;
determining second position information using the first position information, the first vector and the first scale, wherein the second position information is the target position of the three-dimensional map model after the movement instruction is received;
and performing a moving operation on the three-dimensional map model according to the second position information.
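One natural reading of "determining second position information using the first position information, the first vector and the first scale" is a scaled vector offset; the one-liner below is an assumption about that formula, not the claimed computation itself:

```python
def move_model(position, drag_vector, scale):
    """Target position = initial position + drag vector * scale.

    position:    initial (x, y, z) of the model before the move instruction
    drag_vector: per-axis displacement from the move gesture
    scale:       factor converting gesture units to scene units
    """
    return tuple(p + d * scale for p, d in zip(position, drag_vector))
```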
6. The method of claim 3, wherein when the operation instruction is a rotation instruction, performing a control operation corresponding to the operation instruction on the three-dimensional map model includes:
controlling the three-dimensional map model to rotate around a first coordinate axis in a first direction when a second vector corresponding to the rotation instruction falls within a first interval;
controlling the three-dimensional map model to rotate around the first coordinate axis in a second direction when the second vector falls within a second interval, wherein the second direction is opposite to the first direction;
controlling the three-dimensional map model to rotate around a second coordinate axis in the first direction when the second vector falls within a third interval;
and controlling the three-dimensional map model to rotate around the second coordinate axis in the second direction when the second vector falls within a fourth interval.
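The claim does not define the four intervals. One plausible partition, offered purely as an assumption, splits the drag vector by its dominant axis and sign, yielding the four axis/direction cases:

```python
def rotation_for(drag_vector):
    """Map a 2D drag vector to (axis, direction) for the model rotation.

    Dominantly horizontal drags rotate about the vertical ('y') axis,
    dominantly vertical drags about the horizontal ('x') axis; the sign
    of the dominant component picks the direction (+1 or -1).
    """
    dx, dy = drag_vector
    if abs(dx) >= abs(dy):
        return ('y', 1) if dx > 0 else ('y', -1)   # first / second interval
    else:
        return ('x', 1) if dy > 0 else ('x', -1)   # third / fourth interval
```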
7. The method of claim 3, wherein when the operation instruction is a zoom instruction, performing a control operation corresponding to the operation instruction on the three-dimensional map model comprises:
acquiring an initial scaling value, a scaling distance and a second scale;
determining a target scaling value using the initial scaling value, the scaling distance and the second scale;
and performing a scaling operation on the three-dimensional map model according to the target scaling value.
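As with the move case, one plausible combination of the three inputs is a linear offset; the clamping bounds below are an added assumption (the patent does not specify limits) to keep the model within a usable size range:

```python
def target_scale(initial, pinch_distance, scale_factor, lo=0.5, hi=3.0):
    """Target scaling value = initial value + pinch distance * scale factor,
    clamped to [lo, hi] (the clamp is an illustrative assumption)."""
    return max(lo, min(hi, initial + pinch_distance * scale_factor))
```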
8. The method according to claim 3, further comprising, after performing a control operation corresponding to the operation instruction on the three-dimensional map model:
acquiring an updated first information set obtained by performing the control operation on the three-dimensional map model;
determining an updated third information set based on the updated first information set;
determining updated scene content contained in the first game scene according to the updated first information set and the updated third information set;
rendering the updated scene content to the target map;
and rendering the target map to the target object.
9. A three-dimensional game map generation apparatus, comprising:
the first creation module is used for creating a first game scene, wherein the first game scene is a blank game scene;
an adding module, configured to add a preconfigured three-dimensional map model and map information corresponding to the three-dimensional map model to the first game scene, where the map information includes at least one of the following information: place name information, model information or special effect information;
the second creation module is used for creating a rendering target and rendering scene contents contained in the first game scene to a target map bound by the rendering target;
the rendering module is used for rendering the target map to a target object in the current game interface;
wherein the adding module is further used for: acquiring a first information set, wherein the first information set includes position information, rotation information and scaling information of the three-dimensional map model in the first game scene; adding the three-dimensional map model to the first game scene according to the first information set; calculating a third information set using the first information set and a second information set, wherein the second information set includes position information, rotation information and scaling information of the map information relative to the three-dimensional map model, and the third information set includes position information, rotation information and scaling information of the map information in the first game scene; and adding the map information to the first game scene according to the third information set.
10. The apparatus of claim 9, wherein the adding module comprises:
a first obtaining unit, used for obtaining a first information set, wherein the first information set includes position information, rotation information and scaling information of the three-dimensional map model in the first game scene;
and a first adding unit, used for adding the three-dimensional map model to the first game scene according to the first information set.
11. The apparatus of claim 10, wherein the adding module comprises:
a calculating unit, used for calculating a third information set using the first information set and a second information set, wherein the second information set includes position information, rotation information and scaling information of the map information relative to the three-dimensional map model, and the third information set includes position information, rotation information and scaling information of the map information in the first game scene;
and a second adding unit configured to add the map information to the first game scene according to the third information set.
12. The apparatus of claim 10, wherein the apparatus further comprises:
the processing module is used for performing image processing on the target map, wherein the image processing includes at least one of the following: color overlay processing and highlighting processing.
13. The apparatus of claim 9, wherein the apparatus further comprises:
the receiving module is used for receiving an operation instruction acting on the three-dimensional map model;
and an execution module, used for performing a control operation corresponding to the operation instruction on the three-dimensional map model.
14. The apparatus of claim 13, wherein when the operation instruction is a click instruction, the execution module comprises:
a second acquisition unit, used for acquiring a first two-dimensional coordinate of the screen click position in a screen coordinate system;
a first conversion unit, used for converting the first two-dimensional coordinate into a first three-dimensional coordinate in a world coordinate system;
a second conversion unit configured to convert the first three-dimensional coordinate into a second two-dimensional coordinate in a UV coordinate system of the target object;
a third conversion unit, configured to convert the second two-dimensional coordinate into a second three-dimensional coordinate in a three-dimensional space coordinate system of the first game scene;
and a first execution unit, used for adding a marker model at the position of the second three-dimensional coordinate.
15. The apparatus of claim 13, wherein when the operation instruction is a move instruction, the execution module comprises:
a third acquisition unit, used for acquiring first position information, a first vector corresponding to the movement instruction, and a first scale, wherein the first position information is the initial position of the three-dimensional map model before the movement instruction is received;
a first determining unit, used for determining second position information using the first position information, the first vector and the first scale, wherein the second position information is the target position of the three-dimensional map model after the movement instruction is received;
and a second execution unit, used for performing a moving operation on the three-dimensional map model according to the second position information.
16. The apparatus of claim 13, wherein when the operation instruction is a rotation instruction, the execution module is to:
controlling the three-dimensional map model to rotate around a first coordinate axis in a first direction when a second vector corresponding to the rotation instruction falls within a first interval;
controlling the three-dimensional map model to rotate around the first coordinate axis in a second direction when the second vector falls within a second interval, wherein the second direction is opposite to the first direction;
controlling the three-dimensional map model to rotate around a second coordinate axis in the first direction when the second vector falls within a third interval;
and controlling the three-dimensional map model to rotate around the second coordinate axis in the second direction when the second vector falls within a fourth interval.
17. The apparatus of claim 13, wherein when the operation instruction is a zoom instruction, the execution module comprises:
a fourth obtaining unit, configured to obtain an initial scaling value, a scaling distance, and a second scale;
a second determining unit, used for determining a target scaling value using the initial scaling value, the scaling distance and the second scale;
and the third execution unit is used for executing scaling operation on the three-dimensional map model according to the target scaling value.
18. The apparatus of claim 13, wherein the apparatus further comprises:
the acquisition module is used for acquiring an updated first information set obtained by executing the control operation on the three-dimensional map model;
a determining module, configured to determine an updated third information set based on the updated first information set, and determine updated scene content included in the first game scene according to the updated first information set and the updated third information set;
the rendering module is further configured to render the updated scene content to the target map, and render the target map to the target object.
19. A storage medium comprising a stored program, wherein the program, when run, controls a device in which the storage medium is located to perform the method of generating a three-dimensional game map according to any one of claims 1 to 8.
20. A processor for executing a program, wherein the program when executed performs the method of generating a three-dimensional game map according to any one of claims 1 to 8.
21. A terminal, comprising: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs for performing the method of generating the three-dimensional game map of any of claims 1 to 8.
CN201910309444.6A 2019-04-17 2019-04-17 Three-dimensional game map generation method and device, processor and terminal Active CN109939440B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910309444.6A CN109939440B (en) 2019-04-17 2019-04-17 Three-dimensional game map generation method and device, processor and terminal

Publications (2)

Publication Number Publication Date
CN109939440A CN109939440A (en) 2019-06-28
CN109939440B true CN109939440B (en) 2023-04-25

Family

ID=67015538

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910309444.6A Active CN109939440B (en) 2019-04-17 2019-04-17 Three-dimensional game map generation method and device, processor and terminal

Country Status (1)

Country Link
CN (1) CN109939440B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110660118A (en) * 2019-09-25 2020-01-07 网易(杭州)网络有限公司 Game asset processing method, device, equipment and readable storage medium
CN110751052A (en) * 2019-09-25 2020-02-04 恒大智慧科技有限公司 Tourist area guide pushing method, tourist area guide pushing system and storage medium
CN111803952A (en) * 2019-11-21 2020-10-23 厦门雅基软件有限公司 Topographic map editing method and device, electronic equipment and computer readable medium
CN111107419B (en) * 2019-12-31 2021-03-02 福州大学 Method for adding marked points instantly based on panoramic video playing
CN111228816A (en) * 2020-02-10 2020-06-05 郑州阿帕斯数云信息科技有限公司 Scene layout method and device in game
CN111292389B (en) * 2020-02-19 2023-07-25 网易(杭州)网络有限公司 Image processing method and device
CN111701238B (en) * 2020-06-24 2022-04-26 腾讯科技(深圳)有限公司 Virtual picture volume display method, device, equipment and storage medium
CN111803945B (en) * 2020-07-23 2024-02-09 网易(杭州)网络有限公司 Interface rendering method and device, electronic equipment and storage medium
CN112138387A (en) * 2020-09-22 2020-12-29 网易(杭州)网络有限公司 Image processing method, device, equipment and storage medium
CN114445500B (en) * 2020-10-30 2023-11-10 北京字跳网络技术有限公司 Augmented reality scene construction method, device, terminal equipment and storage medium
CN112704874B (en) * 2020-12-21 2023-09-22 北京信息科技大学 Method and device for automatically generating Gotty scene in 3D game
CN113034658B (en) * 2021-03-30 2022-10-04 完美世界(北京)软件科技发展有限公司 Method and device for generating model map
CN113192168A (en) * 2021-04-01 2021-07-30 广州三七互娱科技有限公司 Game scene rendering method and device and electronic equipment
CN113476848B (en) * 2021-07-08 2023-11-17 网易(杭州)网络有限公司 Tree chain map generation method and device, storage medium and electronic equipment
CN113813608B (en) * 2021-10-12 2023-09-15 福建天晴在线互动科技有限公司 Method and system for shrinking 2D game map

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN1838176A (en) * 2006-04-06 2006-09-27 胡小云 Method for making urban three-dimensional dynamic traveling network map
CN108986194A (en) * 2018-07-24 2018-12-11 合肥爱玩动漫有限公司 A kind of scene of game rendering method

Non-Patent Citations (1)

Title
Yidan Technology Studio (ed.), "Rendering Small Scenes," in 3ds max8 Interior Decoration Design: Getting Started and Improving, 2007. *


Similar Documents

Publication Publication Date Title
CN109939440B (en) Three-dimensional game map generation method and device, processor and terminal
JP6875346B2 (en) Information processing methods and devices, storage media, electronic devices
US9436369B2 (en) Touch interface for precise rotation of an object
US20120249542A1 (en) Electronic apparatus to display a guide with 3d view and method thereof
Li et al. Cognitive issues in mobile augmented reality: an embodied perspective
US11604580B2 (en) Configuration of application execution spaces and sub-spaces for sharing data on a mobile touch screen device
CN111062778A (en) Product browsing method, device, equipment and storage medium
US20180046351A1 (en) Controlling display object on display screen
EP4121949A1 (en) 3d cutout image modification
CN111803945A (en) Interface rendering method and device, electronic equipment and storage medium
CN112230909A (en) Data binding method, device and equipment of small program and storage medium
EP4273808A1 (en) Method and apparatus for publishing video, device, and medium
Hoberman et al. Immersive training games for smartphone-based head mounted displays
US10614633B2 (en) Projecting a two-dimensional image onto a three-dimensional graphical object
CN112965780A (en) Image display method, apparatus, device and medium
CN106548504B (en) Webpage animation generation method and device
KR20160050295A (en) Method for Simulating Digital Watercolor Image and Electronic Device Using the same
CN112184852A (en) Auxiliary drawing method and device based on virtual imaging, storage medium and electronic device
CN111179438A (en) AR model dynamic fixing method and device, electronic equipment and storage medium
CN112911052A (en) Information sharing method and device
CN113741775A (en) Image processing method and device and electronic equipment
CN112558844B (en) Tablet computer-based medical image reading method and system
CN113126863A (en) Object selection implementation method and device, storage medium and electronic equipment
CN112667942A (en) Animation generation method, device and medium
EP3506261A1 (en) Information processing program, information processing system and information processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant