CN117398685A - Virtual scene construction method and device, electronic equipment and storage medium


Info

Publication number
CN117398685A
Authority
CN
China
Prior art keywords
scene
path
module
scene module
intersection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311437770.8A
Other languages
Chinese (zh)
Inventor
张同强
黄茂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311437770.8A
Publication of CN117398685A
Legal status: Pending


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/6009 Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tessellation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a virtual scene construction method, a virtual scene construction device, an electronic device and a storage medium. The method comprises the following steps: splicing a plurality of scene module plan views from pre-acquired unit pixel modules; performing a 3D conversion operation on each scene module plan view according to the scene information in that plan view, so as to construct a plurality of scene modules corresponding to the plurality of scene module plan views, wherein each scene module comprises at least a path; determining the path information corresponding to the path in each scene module; and acquiring scene requirement parameters, and splicing, according to the scene requirement parameters, the scene modules whose path information corresponds, so as to construct a target virtual scene. Because the method divides the target virtual scene into a plurality of scene modules that are edited separately, the editing cost is reduced, and the scene modules can be handed over to several developers for parallel editing, which shortens the editing period to a certain extent.

Description

Virtual scene construction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a virtual scene construction method, a virtual scene construction device, an electronic device, and a storage medium.
Background
A virtual scene is a virtual space in which a player plays a game. A game developer can prefabricate the game scene during the development stage, and when the player actually plays, the prefabricated scene is loaded for the player to experience.
In the related art, a virtual scene is usually produced as one entire scene, for example by editing the whole scene screen by screen, and a developer needs to lay out the paths along which the player walks by manually selecting positions across the entire scene.
However, this way of producing a scene consumes a large amount of cost and time, and when a bug occurs in some area of the virtual scene, it is difficult to pinpoint the exact location of the bug.
Disclosure of Invention
In view of the foregoing, an object of the present application is to provide a virtual scene construction method, apparatus, electronic device, and storage medium.
In view of the above object, in a first aspect, the present application provides a virtual scene construction method, including:
splicing a plurality of scene module plan views from pre-acquired unit pixel modules;
performing a 3D conversion operation on each scene module plan view according to the scene information in that plan view, so as to construct a plurality of scene modules corresponding to the plurality of scene module plan views, wherein each scene module comprises at least a path;
determining the path information corresponding to the path in each scene module; and,
acquiring scene requirement parameters, and splicing, according to the scene requirement parameters, the scene modules whose path information corresponds, so as to construct a target virtual scene.
In a second aspect, the present application provides a virtual scene construction apparatus, the apparatus comprising:
a splicing module configured to splice a plurality of scene module plan views from pre-acquired unit pixel modules;
a 3D conversion module configured to perform a 3D conversion operation on each scene module plan view according to the scene information in that plan view, so as to construct a plurality of scene modules corresponding to the plurality of scene module plan views, wherein each scene module comprises at least a path;
a determining module configured to determine the path information corresponding to the path in each scene module; and
a construction module configured to acquire scene requirement parameters and splice, according to the scene requirement parameters, the scene modules whose path information corresponds, so as to construct a target virtual scene.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the virtual scene construction method according to the first aspect when executing the program.
In a fourth aspect, the present application provides a computer-readable storage medium storing computer instructions for causing a computer to perform the virtual scene construction method according to the first aspect.
As can be seen from the above, with the virtual scene construction method, device, electronic device and storage medium provided by the present application, a plurality of scene module plan views are obtained by splicing pre-acquired unit pixel modules; a 3D conversion operation is performed on each scene module plan view according to the scene information in that plan view, so as to construct a plurality of scene modules corresponding to the plurality of scene module plan views, wherein each scene module comprises at least a path; the path information corresponding to the path in each scene module is determined; and scene requirement parameters are acquired, and the scene modules whose path information corresponds are spliced according to those parameters to construct a target virtual scene. Because the scene module plan views are obtained by splicing unit pixel modules, a scene design can be changed quickly, compared with changing the design inside a 3D scene. Because the target virtual scene is divided into a plurality of scene modules that are edited separately, the editing cost is reduced, and the scene modules can be handed over to several developers for parallel editing, which shortens the editing period to a certain extent. Further, a complete target virtual scene is obtained by splicing different scene modules, and because the scene modules to be joined can be chosen freely during splicing, the generated target virtual scenes are more diverse: different target virtual scenes can be spliced from the same group of scene modules, making the virtual scenes richer and more varied.
Drawings
In order to more clearly illustrate the technical solutions of the present application and of the related art, the drawings required in the description of the embodiments are briefly introduced below. Apparently, the drawings described below show only embodiments of the present application, and other drawings can be obtained from them by those of ordinary skill in the art without inventive effort.
Fig. 1 shows a schematic diagram of an automated generation scenario in the related art.
Fig. 2 illustrates an exemplary application scenario schematic diagram of a virtual scenario construction method provided in an embodiment of the present application.
Fig. 3 is an exemplary flowchart of a virtual scene construction method according to an embodiment of the present application.
Fig. 4 shows an exemplary schematic diagram of a unit pixel module in an embodiment according to the present application.
Fig. 5 shows an exemplary schematic of a scene module plan in an embodiment according to the application.
Fig. 6 shows an exemplary schematic diagram of a refined scene module plan in accordance with an embodiment of the application.
Fig. 7 (a) shows an exemplary schematic diagram of a refined scene module plan containing the same number of intersections in an embodiment according to the present application.
Fig. 7 (b) shows an exemplary schematic diagram of another refined scene module plan containing the same number of intersections in an embodiment according to the present application.
Fig. 7 (c) shows an exemplary schematic diagram of still another refined scene module plan containing the same number of intersections in an embodiment according to the present application.
Fig. 8 shows an exemplary schematic diagram of a white-box scene module in an embodiment in accordance with the application.
FIG. 9 illustrates an exemplary schematic diagram of a virtual asset in an embodiment in accordance with the application.
Fig. 10 shows an exemplary schematic of a scene module in an embodiment according to the application.
FIG. 11 illustrates an exemplary schematic diagram of a stitched, refined scene module plan in accordance with embodiments of the application.
Fig. 12 shows an exemplary schematic of a tiled white-boxed scene module in accordance with embodiments of the present application.
Fig. 13 shows an exemplary schematic diagram of a spliced portion of a target virtual scene in accordance with an embodiment of the present application.
FIG. 14 illustrates an exemplary schematic of an operational interface of running test software in accordance with an embodiment of the present application.
Fig. 15 shows an exemplary structural schematic diagram of a virtual scene building apparatus provided in an embodiment of the present application.
Fig. 16 shows an exemplary structural schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings.
It should be noted that, unless otherwise defined, technical or scientific terms used in the embodiments of the present application should be given the ordinary meaning understood by one of ordinary skill in the art to which the present application belongs. The terms "first", "second" and the like used in the embodiments of the present application do not denote any order, quantity or importance, but are merely used to distinguish one element from another. The word "comprising", "comprises" or the like means that the element or item preceding the word encompasses the elements or items listed after the word and their equivalents, without excluding other elements or items. The terms "connected", "coupled" and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right" and so on merely indicate relative positional relationships, which may change when the absolute position of the object described changes.
As described in the background section, a virtual scene is a virtual space in which a player plays a game; a game developer can produce the game scene in advance during the development stage and, when the player actually plays, load the prefabricated scene for the player to experience.
In the related art, a virtual scene is usually produced as one entire scene, for example by editing the whole scene screen by screen, and a developer needs to lay out the paths along which the player walks by manually selecting positions across the entire scene.
From the inventors' study it was found that, in the related art, when a very large virtual scene is produced, for example a scene of (1000×1000) meters, the entire scene can only be captured when a virtual camera in the scene is placed at an altitude of 3000 meters looking straight down. When editing such a huge virtual scene, the related art has to load the entire scene and magnify one part of it at a time to edit it screen by screen, and loading so many scene resources can even cause severe engine stalls. With this way of editing, the cost in resources and time is high; moreover, scene editing can only proceed as a single-person, single-threaded operation, and the engine file can only be maintained by one game developer, so producing the virtual scene requires a great deal of cost and a long production period.
Moreover, all kinds of bugs frequently appear in such very large scenes and are difficult to test, the workload of setting up blanking models is huge and the operations extremely error-prone, and once a bug occurs in some area of the virtual scene, it is difficult to pinpoint its exact location.
Fig. 1 shows a schematic diagram of an automated generation scenario in the related art.
From the inventors' study, in another related technology, referring to fig. 1, scenes can be generated automatically using Houdini (a piece of three-dimensional computer graphics software). Its splicing principle mostly relies on undulating base terrain plus connecting layers, with little modification of the scenery, so the scene finally realized is still very empty; if rich picture content is desired, many additional modular models are required. Because the level routes of underground-palace style scenes are mostly horizontal and vertical and the terrain along the routes is uniform, the automatically generated scenes are in most cases rather lifeless: the routes lack variation, the scene interiors lack theme design and atmosphere changes, and the effect of the scenery changing step by step as the player moves cannot be achieved.
Therefore, the present application provides a virtual scene construction method, device, electronic device and storage medium, which splice a plurality of scene module plan views from pre-acquired unit pixel modules; perform a 3D conversion operation on each scene module plan view according to the scene information in that plan view, so as to construct a plurality of scene modules corresponding to the plurality of scene module plan views, wherein each scene module comprises at least a path; determine the path information corresponding to the path in each scene module; and acquire scene requirement parameters and splice, according to those parameters, the scene modules whose path information corresponds, so as to construct a target virtual scene. Because the scene module plan views are obtained by splicing unit pixel modules, a scene design can be changed quickly, compared with changing the design inside a 3D scene. Because the target virtual scene is divided into a plurality of scene modules that are edited separately, the editing cost is reduced, and the scene modules can be handed over to several developers for parallel editing, which shortens the editing period to a certain extent. Further, a complete target virtual scene is obtained by splicing different scene modules, and because the scene modules to be joined can be chosen freely during splicing, the generated target virtual scenes are more diverse: different target virtual scenes can be spliced from the same group of scene modules, making the virtual scenes richer and more varied.
Fig. 2 illustrates an exemplary application scenario schematic diagram of a virtual scenario construction method provided in an embodiment of the present application.
Referring to fig. 2, in this application scenario, a local terminal device 101 and a server 102 are included. The local terminal device 101 and the server 102 may be connected through a wired or wireless communication network, so as to implement data interaction.
The local terminal device 101 may be an electronic device near the user side with data transmission and multimedia input/output functions, such as a desktop computer, a mobile phone, a mobile computer, a tablet computer, a media player, a vehicle-mounted computer, a smart wearable device, a personal digital assistant (PDA) or another electronic device capable of implementing the above functions. The electronic device may include a display screen with a touch input function for presenting a graphical user interface, which may display a game interface, and a processor for processing the game data, generating the graphical user interface and controlling the display of the graphical user interface on the display screen.
The server 102 may be an independent physical server, a server cluster or distributed system composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data and artificial-intelligence platforms.
In some exemplary embodiments, the virtual scene construction method may be run on the local terminal device 101 or the server 102.
When the virtual scene construction method runs on the server 102, the server 102 provides a virtual scene construction service to the user of a terminal device on which a client communicating with the server 102 is installed, and the user can designate a target program through the client. The server 102 splices a plurality of scene module plan views from pre-acquired unit pixel modules; performs a 3D conversion operation on each scene module plan view according to the scene information in that plan view, so as to construct a plurality of scene modules corresponding to the plurality of scene module plan views, wherein each scene module comprises at least a path; determines the path information corresponding to the path in each scene module; and acquires scene requirement parameters and splices, according to those parameters, the scene modules whose path information corresponds, so as to construct a target virtual scene. The server 102 may also send the target virtual scene to the client, which presents it to the user. The terminal device may be the aforementioned local terminal device 101.
When the virtual scene construction method is run on the server 102, the method can be implemented and executed based on a cloud interaction system.
The cloud interaction system comprises a client device and a cloud game server.
In some example embodiments, various cloud applications may run under the cloud interaction system, for example cloud games. Taking cloud games as an example, a cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the entity that runs the game program is separated from the entity that presents the game picture: the storage and running of the game's control logic are completed on the cloud game server, while the client device only receives and sends data and presents the game picture. For example, the client device can be a display device with data transmission functions near the user side, such as a mobile terminal, a television, a computer or a handheld computer, whereas the cloud game server that performs the information processing is in the cloud. When playing, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the operation instructions, encodes and compresses data such as the game pictures, returns the data to the client device through the network, and finally the client device decodes the data and outputs the game pictures.
In the above embodiments, the description has been given taking an example in which the virtual scene construction method is run on the server 102, but the present disclosure is not limited thereto, and in some exemplary embodiments, the virtual scene construction method may also be run on the local terminal device 101.
The local terminal device 101 may include a display screen and a processor. A client is installed in the local terminal device 101, and a user can specify a target program through the client. The processor splices a plurality of scene module plan views from pre-acquired unit pixel modules; performs a 3D conversion operation on each scene module plan view according to the scene information in that plan view, so as to construct a plurality of scene modules corresponding to the plurality of scene module plan views, wherein each scene module comprises at least a path; determines the path information corresponding to the path in each scene module; and acquires scene requirement parameters and splices, according to those parameters, the scene modules whose path information corresponds, so as to construct a target virtual scene. The processor can also send the target virtual scene to the client, and the client displays it to the user through the display screen.
In some exemplary embodiments, taking a game as an example, the local terminal device 101 stores the game program and is used to present the game picture. The local terminal device 101 interacts with the player through a graphical user interface; that is, conventionally, the game program is downloaded, installed and run on the electronic device. The local terminal device 101 may provide the graphical user interface to the player in a variety of ways; for example, the interface may be rendered on the display screen of the terminal, or provided to the player through holographic projection. For example, the local terminal device 101 may include a display screen for presenting a graphical user interface including the game picture, and a processor for running the game, generating the graphical user interface and controlling the display of the graphical user interface on the display screen.
In some exemplary embodiments, the embodiments of the present disclosure provide a virtual scene construction method in which a graphical user interface is provided through a terminal device; the terminal device may be the aforementioned local terminal device 101 or a client device in the aforementioned cloud interaction system.
The virtual scene construction method according to an exemplary embodiment of the present disclosure is described below in connection with the application scene of fig. 2. It should be noted that the above application scenario is only shown for the convenience of understanding the spirit and principles of the present disclosure, and the embodiments of the present disclosure are not limited in any way in this respect. Rather, embodiments of the present disclosure may be applied to any scenario where applicable.
Fig. 3 is an exemplary flowchart of a virtual scene construction method according to an embodiment of the present application.
Referring to fig. 3, the method for constructing a virtual scene provided in the embodiment of the present application specifically includes the following steps:
s302: and splicing a plurality of scene module plane diagrams according to the pre-acquired unit pixel modules.
S304: performing 3D (three-dimensional) operation on the scene module plan according to scene information in the scene module plan so as to construct a plurality of scene modules corresponding to a plurality of scene module plan; wherein, each scene module at least comprises: a path.
S306: and determining path information corresponding to the paths in each scene module.
S308: and acquiring scene demand parameters, and splicing the scene modules corresponding to the path information according to the scene demand parameters to construct a target virtual scene.
When producing a virtual game scene, for example an ultra-large underground city scene, the world view, story prototype, main character biographies, environment regions, time period, map area and any special requirements of the scene must first be settled. Then, in the construction of the specific virtual game scene, one of the key points of this application is to divide a huge virtual scene into a plurality of modules and splice those modules in the manner of a jigsaw puzzle, thereby splicing together a complete virtual scene, which obviously saves time and labor. Moreover, the way of splicing is not unique, which enriches the construction effect of the whole virtual scene: given modules A, B and C, on the premise that the splicing requirements and splicing rules are satisfied, they can be spliced into a straight line, for example A-B-C or B-A-C, or into other shapes such as an L shape. This is the benefit brought by the modular splicing process.
Fig. 4 shows an exemplary schematic diagram of a unit pixel module in an embodiment according to the present application.
In some embodiments, the types of region that may occur in a scene may be determined from the world view, story prototype, main character biographies, environment regions, time period, map area and any special requirements, all of which can be custom-set by a planner or game developer. Specifically, a unit pixel module such as the one in fig. 4 can be acquired in advance, in which a plurality of regions distinguished by different colors can be included, such as a birth region, a connection channel, a danger region, an escape region, a resource-rich region, a hidden region and a contested region. The unit pixel modules can be set as square modules of size 50x50, which makes them convenient to splice.
Further, a virtual scene generally includes path areas where the player can walk and non-path areas where environmental elements such as buildings, vegetation and lakes are placed, and accordingly the unit pixel modules may include a first type of pixels and a second type of pixels. The first type of pixels, represented for example by the unit pixel modules in the first row of fig. 4, characterizes path regions, while the parts characterizing non-path regions are the second type of pixels, represented for example by the unit pixel modules in the second row of fig. 4. The unit pixel modules shown in the third row are obtained by placing both types in the same plan view, that is, path regions and non-path regions appear in the plan view simultaneously.
Specifically, the path area, path positions, non-path area and non-path positions of the plurality of scene module plan views can be determined from pre-acquired splicing-requirement information, and the first type of pixels can then be spliced at the path positions of a plurality of initial plan views of preset size to obtain the path regions corresponding to the path area. Further, the second type of pixels is spliced at the non-path positions of the plurality of initial plan views to obtain the non-path regions corresponding to the non-path area, and the plurality of scene module plan views is determined from the path regions and the non-path regions.
The splicing-requirement information may be used to indicate the path area, path positions, non-path area and non-path positions. It may, for example, be a plane image assembled from the modules in fig. 4: the modules corresponding to paths are spliced together to determine the path positions, and the path area is determined from the area of those modules; likewise, the modules corresponding to non-paths are spliced together to determine the non-path positions, and the non-path area is determined from their area. In this way, the first type of pixels can then be spliced at the path positions of an initial plan view of preset size, for example 1000x1000, to obtain the path regions, and the second type of pixels at the non-path positions to obtain the non-path regions; the scene module plan view is then obtained from the path regions and the non-path regions.
Fig. 5 shows an exemplary schematic of a scene module plan in an embodiment according to the application.
That is, the splicing requirement may be used to characterize the path regions and non-path regions required in one or more scene module plan views, from which information such as the path area, path positions, non-path area and non-path positions of the scene module plan view can be determined. Further, in initial plan views of preset size, for example the 50x50 initial plan view shown in fig. 5, the positions corresponding to the 'walkable region', 'stairs down' and 'walkable region one floor below' can be determined from the path positions, and the first type of pixels is spliced at the path positions until the path regions corresponding to the path area are obtained. Similarly, for the non-path area, the positions corresponding to it may be determined from the non-path positions, for example the positions of intersections blocked off as non-path and of non-walkable regions, and the second type of pixels is then spliced at the non-path positions until the non-path regions corresponding to the non-path area are obtained. Still further, the scene module plan view may be determined from the path regions and the non-path regions.
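The splicing of unit pixel modules into a plan view can be pictured with a short sketch. The following Python fragment is only an illustration under assumed data structures; PlanCell, splice_plan and the example corridor are hypothetical names introduced for this description, not identifiers from the patent. A plan view is modeled as a grid whose cells are marked as path (first type of pixels) or non-path (second type of pixels) according to the splicing-requirement information.

    from enum import Enum

    class PlanCell(Enum):
        PATH = 1       # first type of pixels: walkable path region
        NON_PATH = 2   # second type of pixels: buildings, vegetation, lakes, ...

    def splice_plan(size, path_positions):
        # Build a size x size plan view: PATH cells at the given path
        # positions, NON_PATH cells everywhere else.
        plan = [[PlanCell.NON_PATH] * size for _ in range(size)]
        for row, col in path_positions:
            plan[row][col] = PlanCell.PATH
        return plan

    # Example: a 50x50 plan view with an L-shaped corridor.
    corridor = [(25, c) for c in range(50)] + [(r, 25) for r in range(25)]
    plan = splice_plan(50, corridor)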
Fig. 6 shows an exemplary schematic diagram of a refined scene module plan in accordance with an embodiment of the application.
After the scene module plan view is obtained, it can be given artistic treatment so that a game developer can construct the specific virtual scene from it more intuitively. For example, the scene module plan view shown in fig. 5 can be processed with drawing software, without changing its form or layout, to obtain a refined scene module plan view such as the one shown in fig. 6.
In yet another implementation, preset maps may be acquired in advance and applied to the scene module plan view shown in fig. 5. For example, a first pre-acquired preset map may be applied to the path area to obtain a path plan view, and a second pre-acquired preset map to the non-path area to obtain a non-path plan view; the refined scene module plan view, for example as shown in fig. 6, is then determined from the path plan view and the non-path plan view.
Fig. 7 (a) shows an exemplary schematic diagram of a refined scene module plan containing the same number of intersections in an embodiment according to the present application.
Fig. 7 (b) shows an exemplary schematic diagram of another refined scene module plan containing the same number of intersections in an embodiment according to the present application.
Fig. 7 (c) shows an exemplary schematic diagram of still another refined scene module plan containing the same number of intersections in an embodiment according to the present application.
For scene module plan views, when scene modules constructed from a plurality of different plan views are spliced, one of the main factors to consider in practice is the interface of the path region on the side used for splicing, similar to the shapes at the joining edge of a jigsaw puzzle, where a protruding 'male' piece splices into a matching 'female' recess. In other words, if the positions and the number of intersections of the path regions are the same across scene module plan views, different interior path layouts and different non-path contents can be designed to obtain different plan views, so that a large number of scene module plan views can be produced quickly and conveniently. Referring to fig. 7 (a)-(c), three scene module plan views are shown that have the same intersection positions and number but different interior path layouts and different non-path contents.
After the scene module plan views are obtained, the scene modules used to splice the target virtual scene must be produced. An important step here is to perform the 3D conversion operation on each scene module plan view according to the scene information in it, so as to construct the plurality of scene modules corresponding to the plurality of scene module plan views.
In some embodiments, the path plan view and the non-path plan view in each scene module plan view may be divided evenly into a plurality of planar grids, and the scene information corresponding to each planar grid determined, where the scene information may include at least the RGB value corresponding to the planar grid. Further, the height value of each planar grid's corresponding position in the scene module may be determined from its RGB value, where the RGB value may be directly or inversely proportional to the height value. For example, the sum of the squares of the R, G and B values may be computed and its square root taken to determine the height value. That is, the RGB value and the height value can stand in two different relationships: in the first, the larger the RGB value, the larger the height value, the overall trend approximating a direct proportion; in the second, the larger the RGB value, the smaller the height value, the overall trend approximating an inverse proportion. No specific limitation is made here.
Still further, according to the height value of each planar grid's corresponding position in the scene module, a 3D conversion operation can be performed on every planar grid of each scene module plan view, so as to construct a plurality of scene modules containing the paths corresponding to the path plan views.
Specifically, a 100x100 scene module plan view can be divided into a plurality of 5x5 planar grids, so that the artistic effect in the plan view can be expressed more flexibly and the precision of the plan view controlled better. After the scene module plan view is obtained, if the scene module produced by the 3D conversion is to be determined from the information in the plan view, the height value of every position after conversion must be determined in order to obtain a three-dimensional scene module. The height of the corresponding position in the scene module may then be determined from the RGB value of the planar grid. For example, the RGB value of a planar grid may be directly proportional to the height value of its corresponding region in the scene module, that is, the larger the RGB value, the larger the height value; or, of course, inversely proportional, that is, the larger the RGB value, the smaller the height value.
Whether the RGB value and the height value are directly or inversely proportional, the height value of each planar grid's corresponding position in the scene module can be determined from the RGB values of the planar grids in the scene module plan view, and the planar grids can then be converted in 3D software to construct the scene module corresponding to the plan view. Of course, the height may be limited; for example, the maximum height cannot exceed 20 units, measured from the bottom surface of the scene module along the vertical axis of the three-dimensional coordinate system, where the coordinates of the center point of the bottom surface of the scene module may be (0, 0, 0).
In some embodiments, an initial height value may be associated with the minimum RGB value; for example, the initial height corresponding to the RGB value (0, 0, 0) may be 0 units. The difference between the RGB value of each planar grid and the minimum RGB value is then determined, and for every full preset-interval RGB value contained in that difference, the height value of the grid's corresponding position in the scene module is increased by one preset height value. For example, if one planar grid has the RGB value (128, 128, 128) and the preset interval RGB value is (64, 64, 64), the difference from the minimum RGB value amounts to 2 preset intervals, so 2 preset height values are added; with a preset height value of 5 units and an initial height of 0, the height value of that planar grid's corresponding position in the scene module changes from 0 to 10 units.
In other embodiments, an initial height value may instead be associated with the maximum RGB value; for example, the initial height corresponding to the RGB value (255, 255, 255) may be 0 units. The difference between the RGB value of each planar grid and the maximum RGB value is then determined, and for every full preset-interval RGB value contained in that difference, the height value of the grid's corresponding position in the scene module is increased by one preset height value. For example, if one planar grid has the RGB value (128, 128, 128) and the preset interval RGB value is (64, 64, 64), the difference from the maximum RGB value amounts to about 2 preset intervals, so 2 preset height values are added; with a preset height value of 5 units, the height value of that planar grid's corresponding position in the scene module changes from 0 to 10 units.
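As a rough illustration of the two mappings just described, the sketch below shows a proportional variant (the square root of the sum of squared channel values, scaled so that pure white reaches the 20-unit cap) and the stepped preset-interval variant. The function names, the default scale and the grey-value assumption in the stepped variant are our own simplifications, not the patent's.

    import math

    def height_from_rgb(rgb, scale=20 / math.sqrt(3 * 255 ** 2)):
        # Proportional variant: larger RGB values give larger heights,
        # capped at 20 units by the default scale.
        r, g, b = rgb
        return math.sqrt(r * r + g * g + b * b) * scale

    def height_from_rgb_steps(rgb, interval=64, step_height=5):
        # Stepped variant: every full preset-interval difference from the
        # minimum RGB value (0, 0, 0) adds one preset height value.
        steps = min(rgb) // interval  # assumes grey values, R == G == B
        return steps * step_height

    print(height_from_rgb_steps((128, 128, 128)))  # 2 intervals -> 10 units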
In some embodiments, the center position of the scene module plan view may also be determined, and the positions in the scene module corresponding to planar grids farther than a preset distance from the center are treated as the boundary; for example, if the overall scene module size is 1000x1000, positions at a distance of 500 from the center are defined as the boundary. Further, the positions in the scene module corresponding to the planar grids of the path plan view may be designated as paths, that is, the planar grids of the path plan view correspond to the paths. The RGB value of the planar grids of the first region inside the boundary that contains paths is set to a first RGB value, and the RGB value of the planar grids of the second region inside the boundary that contains no paths is set to a second RGB value; that is, path regions and non-path regions are represented and distinguished by different RGB values. Still further, a first height value for the first region may be determined from the first RGB value of its planar grids, and a second height value for the second region from the second RGB value of its planar grids. In other words, the height values of the first region containing paths and of the second region containing no paths can be determined directly from the first and second RGB values.
Fig. 8 shows an exemplary schematic diagram of a white-box scene module in an embodiment in accordance with the application.
In some embodiments, to convert the scene module plan view to 3D, white-box model blocks of unit volume may be placed in sequence at the position corresponding to each planar grid, according to the height value of each planar grid's corresponding position in the scene module, so as to construct a white-box scene module. See for example fig. 8, which shows the white-box scene module corresponding to the scene module plan view of fig. 6.
The 3D conversion operation turns the 2D plan view into a 3D model; for example, a white-box scene module, which is a 3D model, is obtained from the scene module plan view. This can be implemented in 3D software with simple patches and simple white-box model blocks (made, for example, as cubes of 1 m, 5 m or 10 m). Then, in an engine (such as the UE4 engine), the wall models and ground patches are laid out in 3D around the routes according to the route map. The wall heights follow a rule: for example, walls may be 5 or 10 meters high, and they can be stacked according to the terrain requirements to form walls of different heights, which also matches the theory of building floors.
The software performing the 3D conversion may, for example, be 3D Maker, which can convert a 2D planar image into a 3D model, is compatible with various picture formats such as JPG, BMP, PNG and GIF, and supports a variety of 3D effects such as spherical, cylindrical and conical. High-precision 3D models can also be generated from depth images and texture images using, for example, the DepthmapX software, which supports a variety of depth-image input modes including triangulation, laser scanning and stereography. As another example, the Blender software supports not only 3D modeling and animation but also the conversion of 2D planar images into 3D models, and provides a variety of 3D conversion tools from which users can choose according to their needs.
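A minimal sketch of the white-boxing step is given below, assuming the scene module is represented by a per-grid height map and the white-box blocks are cubes stacked in 5-unit layers; build_white_box is a hypothetical helper for illustration, not an interface of any of the tools named above.

    def build_white_box(height_map, block_height=5):
        # Return (x, y, z) placements of white-box blocks so that each
        # planar grid cell (x, y) is filled up to its height value.
        placements = []
        for x, row in enumerate(height_map):
            for y, height in enumerate(row):
                for level in range(int(height) // block_height):
                    placements.append((x, y, level * block_height))
        return placements

    # A 2x2 height map: a flat cell, a 5 m step and a 10 m wall.
    blocks = build_white_box([[0, 5], [10, 0]])
    # -> [(0, 1, 0), (1, 0, 0), (1, 0, 5)]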
FIG. 9 illustrates an exemplary schematic diagram of a virtual asset in an embodiment in accordance with the application.
Fig. 10 shows an exemplary schematic of a scene module in an embodiment according to the application.
Further, pre-acquired virtual assets, such as the various types of virtual assets shown in fig. 9, may be bound at preset positions in the white-box scene module, and the white-box scene module can be beautified with these assets to construct the scene module shown in fig. 10.
Still further, lighting, path-finding and blocking can be produced, and finally all objects, lighting, path-finding and blocking bindings are grouped together using the model-group saving function of the Neox2 software, so as to obtain a scene module closer to the design of the target virtual scene.
It should be noted that, in practice, the height values of the different regions of the 3D scene module may also be marked directly in the scene module plan view. For example, a first height parameter indicating the height value, in the scene module, of the position corresponding to the path region may be read from the path plan view, and a second height parameter indicating the height value of the position corresponding to the non-path region may be read from the non-path plan view; that is, the height value corresponding to each grid is marked in advance in the path plan view and the non-path plan view. Further, a 3D conversion operation may be performed on the path plan view according to the first height parameter and on the non-path plan view according to the second height parameter, to construct the plurality of scene modules corresponding to the plurality of scene module plan views. In addition, lighting, fog effects, flame, smoke and height can be designed in the 3D engine, turning a simple planar level into a fully 3D one.
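For the variant with pre-marked height parameters, a toy sketch is given below; merge_height_maps is a hypothetical helper, and None is assumed to mark grids that a given plan view does not define.

    def merge_height_maps(path_heights, non_path_heights):
        # Combine pre-marked height parameters: where the path plan view
        # defines a height (first height parameter) it wins; elsewhere the
        # non-path plan view's value (second height parameter) is used.
        merged = []
        for path_row, other_row in zip(path_heights, non_path_heights):
            merged.append([p if p is not None else n
                           for p, n in zip(path_row, other_row)])
        return merged

    # The path plan view marks a flat corridor through 5-unit terrain.
    print(merge_height_maps([[None, 0], [None, 0]], [[5, 5], [5, 5]]))
    # -> [[5, 0], [5, 0]]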
However the 3D scene module is obtained, certain rules must be observed to ensure that it is reasonable. For example, it may be determined whether the angle between the plane on which a path lies and the bottom surface of the scene module exceeds a preset angle, which may be set to 45°. If it does, the height values along the path can be adjusted so that the angle becomes smaller than or equal to the preset angle, and the RGB values of the planar grids corresponding to the path in the scene module plan view are re-determined from the adjusted height values. That is, it may be determined whether the gradient of a path in the scene module is too steep: once the gradient exceeds 45°, the scene module looks abrupt, and in an actual game the player would feel that the virtual character walks unnaturally in the target virtual scene. The height values of any path whose gradient exceeds 45° are therefore adjusted, and the path is smoothed until its gradient is at most 45°, with the RGB values of the corresponding planar grids in the scene module plan view adjusted accordingly.
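The gradient rule can be sketched as follows; clamp_path_slope and the two-point representation of a path segment are illustrative assumptions rather than the patent's procedure, which smooths the whole path.

    import math

    MAX_SLOPE_DEG = 45.0  # preset angle

    def clamp_path_slope(start_height, end_height, run_length):
        # If the slope between two path points exceeds the preset angle,
        # lower (or raise) the end height until the slope equals it; the
        # RGB values of the affected planar grids in the plan view would
        # then be re-determined from the adjusted heights.
        rise = end_height - start_height
        angle = math.degrees(math.atan2(abs(rise), run_length))
        if angle <= MAX_SLOPE_DEG:
            return end_height
        max_rise = run_length * math.tan(math.radians(MAX_SLOPE_DEG))
        return start_height + math.copysign(max_rise, rise)

    print(clamp_path_slope(0, 12, 10))  # 50.2 degrees -> clamped to 10.0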
FIG. 11 illustrates an exemplary schematic diagram of a stitched, refined scene module plan in accordance with embodiments of the application.
For the subsequent splicing, for any scene module, the intersections used to connect to the paths of other scene modules can be determined from the directions in which its paths extend, together with the intersection position information, intersection number information, intersection size and intersection opening direction of each intersection. The path information corresponding to the path in each scene module can then be determined from this intersection information. For example, if one scene module has 1 intersection on side A, 2 on side B, 0 on side C and 1 on side D, the intersection position information can be expressed as which sides contain intersections, and the intersection number information is 1, 2, 0 and 1 respectively. The size of an intersection can be determined from the distance between its two edges in the scene module, and its opening direction is the direction corresponding to the side it lies on, that is, the direction the opening faces. Referring to fig. 11, which uses a plan view to show the spliced scene once all the scene modules are joined, a plurality of mutually connected paths is obtained.
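For illustration only, the path information derived here could be carried by a small record per intersection; the field names below are hypothetical, not the patent's.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Intersection:
        side: str        # which side of the module: "A", "B", "C" or "D"
        offset: float    # position of the opening along that side
        size: float      # width of the opening
        direction: str   # opening direction: "left", "right", "up" or "down"

    # Path information for the example module in the text: one opening on
    # side A, two on side B, none on side C and one on side D.
    module_paths = [
        Intersection("A", 25.0, 10.0, "left"),
        Intersection("B", 15.0, 10.0, "up"),
        Intersection("B", 35.0, 10.0, "up"),
        Intersection("D", 25.0, 10.0, "right"),
    ]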
Fig. 12 shows an exemplary schematic of a tiled white-boxed scene module in accordance with embodiments of the present application.
Specifically, the required number of scene modules can be determined from the scene requirement parameters. For any scene module, other scene modules whose intersection number information, intersection size, intersection position information and intersection opening direction match those of one of its sides can be spliced to it, such that the angle between the planes of the two paths at the joint lies within a preset angle interval and the paths connect to each other. Further, by splicing scene modules until all the paths in them are connected and the number of spliced modules reaches the required number, the target virtual scene is constructed. Referring to fig. 12, the spliced white-box scene modules of the target virtual scene before decoration correspond to the plan view in fig. 11, that is, all the paths are mutually connected.
In some embodiments, it may be determined whether, among the other scene modules, there are candidate scene modules whose intersection number information and intersection sizes on a first side are the same as those on a second side of the current scene module, where the number of intersections on each side is greater than a preset number. If such candidates exist, it is determined whether any of them is a target scene module whose intersection opening directions on the first side are opposite to those on the second side of the current scene module and whose intersection positions on the first side coincide with those on the second side. If such a target scene module exists, the second side of the current scene module and the first side of the target scene module can be spliced together to obtain mutually connected paths between the scene modules.
For example, scene module 1 has an intersection M of size 10 facing a first direction (say, to the left), located at the exact center of side A, that is, at the same distance from both ends of side A. Scene module 2 has an intersection N of size 10 facing a second direction opposite to the first, for example to the right, located at the exact center of side B, at the same distance from both ends of side B. Side A of scene module 1 and side B of scene module 2 then agree in intersection number information and intersection size, the positions of intersections M and N coincide, and their opening directions are opposite, so side A of scene module 1 can be spliced to side B of scene module 2; the angle between the planes of the two paths at the joint lies within the preset angle interval, and mutually connected paths are obtained.
In other embodiments, if the intersection number information of the first side and of the second side are both zero, the second side of the scene module is spliced to the first side of any one of the candidate scene modules. That is, if each of two scene modules has a side with 0 intersections, i.e. containing no intersections, those two sides can be spliced together.
If no candidate scene module has a first side whose intersection opening directions are opposite to those of the second side of the current scene module and/or whose intersection positions coincide with those of the second side, it is determined whether some candidate scene module satisfies these conditions after a transformation operation, where the transformation operation includes at least a rotation about the central axis that is perpendicular to the bottom surface of the scene module and passes through its center point. If, after the transformation operation, such a target scene module exists among the candidates, the second side of the current scene module and the first side of the target scene module are spliced together to obtain mutually connected paths between the scene modules.
For example, suppose each scene module has four sides, up, down, left and right. The upper side of scene module 1 has an intersection K at its exact center, and the right side of scene module 2 has an intersection L at its exact center. If scene module 2 is rotated by 90° counterclockwise about the central axis that is perpendicular to its bottom surface and passes through its center point, the opening of intersection L turns from rightward to downward; the sides carrying intersection K and intersection L can then be spliced, so scene module 1 and scene module 2 can be joined.
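A minimal sketch of the transformation operation, following the worked example above in which one 90° rotation about the vertical central axis turns a rightward opening into a downward one; the direction cycle and the dictionary representation are assumptions introduced for illustration.

# One quarter rotation, matching the example: right becomes down.
ROTATE_90 = {"right": "down", "down": "left", "left": "up", "up": "right"}

def rotate_openings(openings_by_side: dict, turns: int = 1) -> dict:
    """Remap side labels and opening directions after `turns` quarter rotations."""
    result = openings_by_side
    for _ in range(turns % 4):
        result = {ROTATE_90[side]: [ROTATE_90[d] for d in dirs]
                  for side, dirs in result.items()}
    return result

# Scene module 2 from the example: intersection L opens rightward on the
# right side; after one rotation it sits on the lower side opening downward,
# opposite to intersection K's upward opening, so the two sides can splice.
module_2 = {"right": ["right"]}
print(rotate_openings(module_2))  # {'down': ['down']}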
In some embodiments, the path heights within a single scene module need not be uniform, that is, a path within one scene module may have a gradient, which of course must also be no greater than a preset angle, such as 45°. When two scene modules that satisfy the splicing conditions of the foregoing embodiments are spliced, it must additionally be considered whether the spliced path extends acceptably. Specifically, when the paths at the intersections being spliced both have gradients, the inclinations of the two path planes relative to the ground should be complementary at the joint, so that the spliced path remains smooth and continuous.
It should be noted that an angle between the two paths to be spliced is also permitted: for example, if both paths ascend toward the joint or both descend, the spliced path presents a V shape or an inverted V shape, and as long as each gradient is no greater than the preset angle, such as 45°, the paths are still considered spliceable.
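The two gradient rules above can be expressed as a small illustrative sketch; the signed-gradient convention (measured toward the joint, positive meaning ascending) is an assumption introduced for exposition.

import math

PRESET_ANGLE_DEG = 45.0  # the preset angle used as the example in the text

def gradients_spliceable(slope_a_deg: float, slope_b_deg: float) -> bool:
    """A V or inverted-V joint (both gradients positive, or both negative)
    is allowed while each gradient stays within the preset angle."""
    return (abs(slope_a_deg) <= PRESET_ANGLE_DEG and
            abs(slope_b_deg) <= PRESET_ANGLE_DEG)

def joint_is_smooth(slope_a_deg: float, slope_b_deg: float) -> bool:
    """Complementary gradients: one path ascends toward the joint at the
    same angle the other descends, so the spliced path continues smoothly."""
    return math.isclose(slope_a_deg, -slope_b_deg, abs_tol=1e-9)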
In some embodiments, a technician may impose special requirements on the overall scene stitching, such as a wide room appearing in the scene after several adjacent scene modules are spliced. It must then be ensured that the road width in the corresponding modules reaches a predetermined range, for example the narrowest part of the road no less than 15 and the widest no more than 90, and that the road area falls within a preset range, at minimum no less than 15x15 and at maximum no more than 90x90.
In the spliced overall scene, the utilization rate of a module can be judged from its number of intersections: the more intersections, the higher the reuse rate; the fewer intersections, the lower the usage rate. The number of intersections is at most 8 and at least 1, and modules with few intersections are treated as custom modules. A module's weight is determined from its utilization rate: the more intersections, the higher the weight, and the fewer intersections, the lower the weight. General modules must have all 8 intersections, and their proportion among the modules must be high for the scene routes to appear rich; at the same time, a certain number of custom modules with few intersections must be present to make the routes more interesting.
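An illustrative weighting rule consistent with this description; the linear mapping is an assumption, not fixed by the text.

MAX_INTERSECTIONS = 8  # at most 8 intersections per module
MIN_INTERSECTIONS = 1  # at least 1

def module_weight(intersection_count: int) -> float:
    """More intersections, higher weight; fewer intersections, lower weight."""
    if not MIN_INTERSECTIONS <= intersection_count <= MAX_INTERSECTIONS:
        raise ValueError("intersection count must lie between 1 and 8")
    return intersection_count / MAX_INTERSECTIONS

def is_general_module(intersection_count: int) -> bool:
    """General modules keep all 8 intersections; the rest are custom modules."""
    return intersection_count == MAX_INTERSECTIONS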
After the target virtual scene is constructed, it can be tested to verify the rationality of its paths and to avoid program bugs when the target virtual scene is formally put into use. For example, path-finding tracks distributed along the paths may be generated according to the paths in the target virtual scene, and it may then be determined whether at least two path-finding tracks exist that do not communicate with each other. If so, the scene module corresponding to the shorter path-finding track is replaced until no two mutually non-communicating path-finding tracks remain in the target virtual scene. Every path of the final target virtual scene is thereby communicated, avoiding unusable, disconnected paths.
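A hypothetical sketch of the connectivity test, treating path-finding tracks as nodes of a graph whose edges are spliced intersections; the graph representation and the length_of callback are assumptions introduced for illustration.

from collections import defaultdict, deque

def connected_components(tracks, links):
    """tracks: iterable of track ids; links: iterable of (a, b) pairs joining
    two tracks that connect at a spliced intersection."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for start in tracks:
        if start in seen:
            continue
        queue, component = deque([start]), []
        seen.add(start)
        while queue:
            node = queue.popleft()
            component.append(node)
            for nxt in graph[node] - seen:
                seen.add(nxt)
                queue.append(nxt)
        components.append(component)
    return components

def needs_repair(tracks, links):
    """More than one component means at least two tracks do not communicate."""
    return len(connected_components(tracks, links)) > 1

def track_group_to_replace(components, length_of):
    """Pick the disconnected track group with the shorter total length; the
    scene module corresponding to it is the one to replace."""
    return min(components, key=lambda c: sum(length_of(t) for t in c))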
Fig. 13 shows an exemplary schematic diagram of a spliced portion of a target virtual scene in accordance with an embodiment of the present application.
Referring to fig. 13, it can be seen that after splicing and after the beautification and decoration are finished, the presented paths communicate with one another, and inside the target virtual scene the paths are sloped; for example, the stepped portion rises at an upward slope angle.
FIG. 14 illustrates an exemplary schematic of an operational interface of running test software in accordance with an embodiment of the present application.
Running test software can also be used to carry out an actual running test on the target virtual scene. Specifically, a plurality of collision bodies may be arranged on the path-finding track, where the collision bodies can bear a movable virtual object so that it can move in any direction on the track. A movable test virtual character can further be created in the target virtual scene and configured with a test task, so that the test virtual character moves along the extending direction of the path-finding track according to the task. When the test virtual character can traverse the path-finding track, movable virtual objects are created at the positions corresponding to the collision bodies according to pre-acquired virtual assets. That is, referring to fig. 14, a test task may be configured for a test virtual character, including, for example, the character path corresponding to the track the character is to run, and corresponding actions may be configured for the character. During the running test, the process may be presented in real time on a display device, so that parameters of the scene window and of the virtual camera in the scene can be configured, for example the rotation angle, the inclination angle, and the distance between the camera and the test virtual character.
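A minimal sketch of such a running test; the collider objects, character.move_to, character.reached and spawn_object are stand-ins for engine facilities, not an API described in this application.

def run_traversal_test(track, character, spawn_object, assets):
    """track: ordered collision bodies; character: the movable test role."""
    for collider in track:                    # the configured test task:
        character.move_to(collider.position)  # walk the track end to end
        if not character.reached(collider.position):
            return False                      # traversal failed; path defect
    # traversal succeeded: create movable virtual objects on the colliders
    for collider, asset in zip(track, assets):
        spawn_object(asset, collider.position)
    return True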
Further, virtual objects, such as interactable NPC characters, may be added to the target virtual scene to increase its playability. For example, the area of the path corresponding to the track in each scene module may be determined, and from that area the distribution density of the movable virtual objects created at the positions corresponding to the collision bodies in each scene module may be determined. For instance, where the area of the corresponding path in a scene module is larger, movable virtual objects such as NPCs may be configured at a higher distribution density; such a path may correspond to a mart in the virtual scene, where a larger number of NPCs should appear.
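An illustrative density rule for this step; the proportionality constant is an assumption introduced for the example.

NPCS_PER_UNIT_AREA = 0.02  # assumed tuning constant

def npc_count_for_module(path_area: float) -> int:
    """Larger path areas (e.g. a mart) receive proportionally more NPCs."""
    return max(1, round(path_area * NPCS_PER_UNIT_AREA))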
From the above, it can be seen that, according to the virtual scene construction method, apparatus, electronic device and storage medium provided by the present application, a plurality of scene module plan views are obtained by splicing pre-acquired unit pixel modules; a 3D (three-dimensional) operation is performed on the scene module plan views according to the scene information therein, so as to construct a plurality of scene modules corresponding to the plurality of scene module plan views, each scene module at least comprising a path; path information corresponding to the path in each scene module is determined; and scene demand parameters are acquired, and the scene modules corresponding to the path information are spliced according to the scene demand parameters to construct a target virtual scene. Because the scene module plan views are obtained by splicing unit pixel modules, the scene design can be changed quickly compared with changing it directly in a 3D scene. Dividing the target virtual scene into a plurality of scene modules to be edited separately reduces the editing cost, and since multiple developers can each edit a scene module in parallel, the editing cycle is shortened to some extent. Further, a complete target virtual scene is obtained by splicing different scene modules; because the scene modules to be spliced can be chosen freely during splicing, the generated overall scenes are more diversified, and different target virtual scenes can be spliced from a plurality of different scene modules of the same group, making the virtual scenes richer and more varied.
It should be noted that, the method of the embodiments of the present application may be performed by a single device, for example, a computer or a server. The method of the embodiment can also be applied to a distributed scene, and is completed by mutually matching a plurality of devices. In the case of such a distributed scenario, one of the devices may perform only one or more steps of the methods of embodiments of the present application, and the devices may interact with each other to complete the methods.
It should be noted that some embodiments of the present application are described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Fig. 15 shows an exemplary structural schematic diagram of a virtual scene building apparatus provided in an embodiment of the present application.
Based on the same inventive concept, the application also provides a virtual scene construction device corresponding to the method of any embodiment.
Referring to fig. 15, the virtual scene construction apparatus includes a splicing module, a 3D (three-dimensional) operation module, a determining module and a construction module, wherein:

the splicing module is configured to splice pre-acquired unit pixel modules to obtain a plurality of scene module plan views;

the 3D operation module is configured to perform a 3D operation on the scene module plan views according to the scene information in the scene module plan views, so as to construct a plurality of scene modules corresponding to the plurality of scene module plan views; wherein each scene module at least comprises: a path;
the determining module is configured to determine path information corresponding to the paths in each scene module;
the construction module is configured to acquire scene demand parameters and splice the scene modules corresponding to the path information according to the scene demand parameters so as to construct a target virtual scene.
In one possible implementation manner, the unit pixel module includes: a first type pixel and a second type pixel;
The splice module is further configured to:
determining path areas, path positions, non-path areas and non-path positions in the scene module plan according to the pre-acquired splicing requirement information;
splicing the first type pixels at the path positions in a plurality of initial plan views with preset sizes to obtain a path area corresponding to the path area;
splicing the second type of pixels at the non-path positions in the initial plan views to obtain a non-path region corresponding to the non-path area;
and determining the plurality of scene module plan views according to the path region and the non-path region.
In one possible implementation, the splicing module is further configured to:

applying a pre-acquired first preset map to the path region to obtain a path plan view;

applying a pre-acquired second preset map to the non-path region to obtain a non-path plan view;
and determining the plurality of scene module plan views according to the path plan view and the non-path plan view.
In one possible implementation, the 3D manipulation module is further configured to:
Equally dividing the path plane graph and the non-path plane graph in each scene module plane graph into a plurality of plane grids, and determining the scene information corresponding to each plane grid; wherein, the scene information at least comprises: RGB values corresponding to the planar grid;
determining the height value of the corresponding position of each planar grid in the scene module according to the RGB value corresponding to each planar grid;
and according to the height value of the corresponding position of each plane grid in each scene module plane graph in the scene module, performing 3D (three-dimensional) operation on each plane grid in each scene module plane graph to construct a plurality of scene modules containing paths corresponding to the path plane graph.
In one possible implementation, the 3D manipulation module is further configured to:
setting an initial height value corresponding to the minimum RGB value, and determining a difference value between the RGB value corresponding to each plane grid and the minimum RGB value;
and increasing the height value of the corresponding position of each planar grid in the scene module by a preset height value in response to the difference between the RGB value corresponding to each planar grid and the minimum RGB value reaching a preset multiple of the preset interval RGB value.
In one possible implementation, the 3D manipulation module is further configured to:
setting an initial height value corresponding to the maximum RGB value, and determining a difference value between the RGB value corresponding to each planar grid and the maximum RGB value;
and increasing the height value of the corresponding position of each planar grid in the scene module by a preset height value in response to the difference between the RGB value corresponding to each planar grid and the maximum RGB value reaching a preset multiple of the preset interval RGB value.
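The height rule of the two implementations above can be sketched as follows; a grayscale integer stands in for the RGB triple, and all constants are assumptions introduced for illustration.

INITIAL_HEIGHT = 0.0  # assumed initial height at the baseline RGB value
STEP_HEIGHT = 1.0     # assumed preset height value added per step
INTERVAL = 16         # assumed preset interval RGB value

def grid_height(rgb: int, min_rgb: int) -> float:
    """Height of one planar grid from its RGB value, minimum-RGB baseline:
    every preset-interval multiple of difference adds one preset height."""
    steps = (rgb - min_rgb) // INTERVAL
    return INITIAL_HEIGHT + steps * STEP_HEIGHT

def grid_height_from_max(rgb: int, max_rgb: int) -> float:
    """Variant with the maximum RGB value as the baseline."""
    steps = (max_rgb - rgb) // INTERVAL
    return INITIAL_HEIGHT + steps * STEP_HEIGHT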
In one possible implementation, the 3D manipulation module is further configured to:
determining the central position of the plane graph of the scene module, and determining the corresponding position of a plane grid, which is more than a preset distance from the central position, in the scene module as a boundary;
determining the corresponding position of the plane grid in the path plan in the scene module as a path;
setting the RGB value of the plane grid corresponding to the first area containing the path in the boundary as a first RGB value, and setting the RGB value of the plane grid corresponding to the second area not containing the path in the boundary as a second RGB value;
and determining a first height value corresponding to the first region according to a first RGB value of the plane grid corresponding to the first region, and determining a second height value corresponding to the second region according to a second RGB value of the plane grid corresponding to the second region.
In one possible implementation, the 3D manipulation module is further configured to:
according to the height value of the corresponding position of each planar grid in the scene module, sequentially placing unit-volume white-box model blocks at the position corresponding to each planar grid in each scene module plan view, to construct a white-box scene module;

binding pre-acquired virtual assets at preset positions in the white-box scene module to construct the scene module.
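A sketch of the white-box construction step; place_block stands in for the engine call that places one unit-volume white-box model block and is an assumption introduced for the example.

def build_white_box(heights, place_block):
    """heights: 2D grid of per-cell height values; place_block(x, y, z)
    places one unit-volume white-box model block at that grid position."""
    for y, row in enumerate(heights):
        for x, cell_height in enumerate(row):
            for z in range(int(cell_height)):  # stack blocks up to the height
                place_block(x, y, z)
    # pre-acquired virtual assets would then be bound at preset positions
    # in the resulting white-box scene module to finish the scene module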
In one possible implementation, the 3D manipulation module is further configured to:
reading a first height parameter in the path plan view, which characterizes a height value of a corresponding position of the path region in the scene module, and reading a second height parameter in the non-path plan view, which characterizes a height value of a corresponding position of the non-path region in the scene module;
and executing 3D operation on the path plane graph according to the first height parameter, and executing 3D operation on the non-path plane graph according to the second height parameter so as to construct a plurality of scene modules corresponding to a plurality of scene module plane graphs.
In one possible implementation, the 3D manipulation module is further configured to:
determining whether an included angle between a plane where the path is located and the bottom surface of the scene module exceeds a preset angle;
and in response to the included angle between the plane where the path is located and the bottom surface of the scene module exceeding the preset angle, adjusting the height value corresponding to the path so that the included angle between the plane where the path is located and the bottom surface of the scene module is smaller than or equal to the preset angle, and re-determining the RGB value of the planar grid corresponding to the path in the scene module plan according to the adjusted height value corresponding to the path.
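A sketch of the angle check and height adjustment; the preset angle value reuses the 45° example from the text, and run is assumed positive.

import math

PRESET_ANGLE_DEG = 45.0  # the preset angle example from the text

def clamp_path_rise(rise: float, run: float) -> float:
    """Reduce the path's height gain so the angle between the path plane and
    the module's bottom surface stays within the preset angle; the path
    grids' RGB values would then be re-derived from the adjusted height."""
    if math.degrees(math.atan2(rise, run)) <= PRESET_ANGLE_DEG:
        return rise
    return run * math.tan(math.radians(PRESET_ANGLE_DEG))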
In one possible implementation, the determining module is further configured to:
for any of the plurality of scene modules,
determining intersections used for communicating with paths in other scene modules according to the extending directions of the paths in the scene modules, and determining intersection position information, intersection quantity information, intersection size and intersection opening directions corresponding to the intersections;
and determining path information corresponding to the path in each scene module according to the intersection position information, the intersection quantity information, the intersection size and the intersection opening direction corresponding to the intersection in each scene module.
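A hypothetical extraction of the per-side intersection information from a module's path mask; the 2D boolean grid representation is an assumption. A run of path cells touching a border forms one intersection on that side, whose opening direction equals the side itself.

def side_intersections(path_mask, side):
    """path_mask: 2D list of booleans (True = path cell); side: 'up',
    'down', 'left' or 'right'. Returns (position, size) pairs, one per run
    of path cells on that edge."""
    height, width = len(path_mask), len(path_mask[0])
    if side in ("up", "down"):
        cells = path_mask[0] if side == "up" else path_mask[height - 1]
    else:
        column = 0 if side == "left" else width - 1
        cells = [row[column] for row in path_mask]
    runs, start = [], None
    for i, is_path in enumerate(list(cells) + [False]):  # sentinel ends a final run
        if is_path and start is None:
            start = i
        elif not is_path and start is not None:
            runs.append((start, i - start))  # (intersection position, size)
            start = None
    return runs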
In one possible implementation, the building module is further configured to:
determining the required quantity of the scene modules according to the scene requirement parameters;
for any of the plurality of scene modules,
splicing, with the scene module, other scene modules whose sides carry the same intersection quantity information and intersection size as a side of the scene module and whose intersection position information and intersection opening direction correspond to those of the scene module, to obtain mutually communicating paths between the scene modules, wherein the included angle between the planes of the two paths at the splice between scene modules lies within a preset angle interval;
and splicing the scene modules to communicate all paths in the scene modules, wherein the number of the scene modules subjected to splicing reaches the required number of the scene modules so as to construct and obtain the target virtual scene.
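A greedy assembly sketch consistent with the steps above; the module and side objects, the reuse of candidate modules, and the random choice policy are illustrative assumptions, not part of the claimed method.

import random

def build_target_scene(required_count, seed, candidates, sides_match):
    """Repeatedly splice a compatible module onto an open side until the
    required module count is reached; sides_match tests side compatibility."""
    placed, open_sides = [seed], list(seed.sides)
    while len(placed) < required_count and open_sides:
        side = open_sides.pop(0)
        options = [(m, s) for m in candidates for s in m.sides
                   if sides_match(s, side)]
        if not options:
            continue                     # no compatible module for this side
        module, matched = random.choice(options)  # free choice diversifies scenes
        placed.append(module)
        open_sides.extend(s for s in module.sides if s is not matched)
    return placed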
In one possible implementation, the building module is further configured to:
determining whether, among the other scene modules whose intersection quantity information on each side exceeds the preset number, there exist candidate scene modules whose first side has the same intersection quantity information and intersection size as the second side of the scene module;

in response to such candidate scene modules existing, determining whether among them there is a target scene module in which the intersection opening direction of the first side is opposite to that of the second side of the scene module and the intersection position information of the first side overlaps the intersection positions on the second side;

and in response to such a target scene module existing, splicing the second side of the scene module and the first side of the target scene module to each other to obtain mutually communicating paths between the scene modules.
In one possible implementation, the building module is further configured to:
and in response to zero crossing number information of the first side and zero crossing number information of the second side, mutually splicing the second side of the scene module and the first side of any one of the candidate scene modules to obtain mutually communicated paths between the scene modules.
In one possible implementation, the building module is further configured to:
in response to no target scene module existing among the candidate scene modules in which the intersection opening direction of the first side is opposite to that of the second side of the scene module and the intersection position information of the first side overlaps the intersection positions on the second side, determining whether among the candidate scene modules there is a target scene module in which, after a transformation operation, the intersection opening direction of the first side is opposite to that of the second side and the intersection position information of the first side overlaps the intersection positions on the second side; wherein the transformation operation at least comprises: a rotation about a central axis that passes through the center point of the scene module and is perpendicular to its bottom surface;

and in response to such a target scene module existing after the transformation operation, splicing the second side of the scene module and the first side of the target scene module to each other to obtain mutually communicating paths between the scene modules.
In one possible implementation manner, the apparatus further includes: a test module;
the test module is configured to:
generating path finding tracks distributed along the paths according to the paths in the target virtual scene;
determining whether at least two mutually non-communicated path-finding tracks exist;
and in response to the existence of at least two non-communicated path-finding tracks, replacing a scene module corresponding to the path-finding track with shorter length until at least two non-communicated path-finding tracks do not exist in the target virtual scene.
In one possible implementation, the test module is further configured to:
a plurality of collision bodies are arranged on the path-finding track; the collision body is used for bearing a movable virtual object so that the movable virtual object can move along any direction on the path-finding track;
creating a movable test virtual role in the target virtual scene, and configuring a test task to the test virtual role so that the test virtual role moves along the extending direction of the path-finding track according to the test task;
and responding to the test virtual character to traverse the path-finding track, and creating the movable virtual object at the corresponding position of the collision body according to the pre-acquired virtual asset.
In one possible implementation manner, the apparatus further includes: creating a module;
the creation module is configured to:
determining the area of a path corresponding to the track in the scene module;
and determining the distribution density of the movable virtual object created at the corresponding position of the collision body in each scene module according to the area of the path corresponding to the track in the scene module.
For convenience of description, the above devices are described as being functionally divided into various modules, respectively. Of course, the functions of each module may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
The device of the foregoing embodiment is configured to implement the corresponding virtual scene construction method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Fig. 16 shows an exemplary structural schematic diagram of an electronic device according to an embodiment of the present application.
Based on the same inventive concept, the application also provides an electronic device corresponding to the method of any embodiment, which comprises a memory, a processor and a computer program stored on the memory and runnable on the processor, the processor implementing the virtual scene construction method of any of the above embodiments when executing the program. Fig. 16 shows a more specific hardware architecture of an electronic device according to this embodiment; the device may include: a processor 1610, a memory 1620, an input/output interface 1630, a communication interface 1640, and a bus 1650. The processor 1610, the memory 1620, the input/output interface 1630 and the communication interface 1640 are communicatively connected to one another within the device via the bus 1650.
The processor 1610 may be implemented by a general-purpose CPU (Central Processing Unit ), microprocessor, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The Memory 1620 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory ), static storage device, dynamic storage device, or the like. The memory 1620 may store an operating system and other application programs, and when the technical solutions provided in the embodiments of the present specification are implemented by software or firmware, relevant program codes are stored in the memory 1620 and invoked by the processor 1610 for execution.
The input/output interface 1630 is used for connecting with an input/output module to realize information input and output. The input/output module may be configured as a component in a device (not shown in the figure) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
Communication interface 1640 is used to connect communication modules (not shown) to enable communication interactions of the present device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 1650 includes a path for transmitting information between components of the device (e.g., processor 1610, memory 1620, input/output interface 1630, and communication interface 1640).
It should be noted that although the above devices only show processor 1610, memory 1620, input/output interface 1630, communication interface 1640, and bus 1650, in an implementation, the device may include other components necessary to achieve proper operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The electronic device of the foregoing embodiment is configured to implement the corresponding virtual scene construction method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
The memory 1620 stores machine readable instructions executable by the processor 1610. When the electronic device operates, the processor 1610 and the memory 1620 communicate over the bus 1650, causing the processor 1610 to execute the following instructions:
splicing pre-acquired unit pixel modules to obtain a plurality of scene module plan views;

performing a 3D (three-dimensional) operation on the scene module plan views according to scene information in the scene module plan views, so as to construct a plurality of scene modules corresponding to the plurality of scene module plan views; wherein each scene module at least comprises: a path;
determining path information corresponding to the paths in each scene module; the method comprises the steps of,
and acquiring scene demand parameters, and splicing the scene modules corresponding to the path information according to the scene demand parameters to construct a target virtual scene.
In one possible embodiment, the unit pixel module includes: a first type pixel and a second type pixel;
in the instructions executed by the processor 1610, the splicing, according to the pre-acquired unit pixel modules, to obtain a plurality of scene module plan views includes:
determining path areas, path positions, non-path areas and non-path positions in the scene module plan according to the pre-acquired splicing requirement information;
Splicing the first type pixels at the path positions in a plurality of initial plan views with preset sizes to obtain a path area corresponding to the path area;
splicing the second type of pixels at the non-path positions in the initial plan views to obtain a non-path region corresponding to the non-path area;
and determining the plurality of scene module plan views according to the path region and the non-path region.
In a possible implementation manner, in the instructions executed by the processor 1610, the determining the plurality of scene module plan views according to the path area and the non-path area includes:
applying a pre-acquired first preset map to the path region to obtain a path plan view;

applying a pre-acquired second preset map to the non-path region to obtain a non-path plan view;
and determining the plurality of scene module plan views according to the path plan view and the non-path plan view.
In a possible implementation manner, in the instructions executed by the processor 1610, the performing a 3D operation on the scene module plan according to the scene information in the scene module plan to construct a plurality of scene modules corresponding to a plurality of scene module plans includes:
Equally dividing the path plane graph and the non-path plane graph in each scene module plane graph into a plurality of plane grids, and determining the scene information corresponding to each plane grid; wherein, the scene information at least comprises: RGB values corresponding to the planar grid;
determining the height value of the corresponding position of each planar grid in the scene module according to the RGB value corresponding to each planar grid;
and according to the height value of the corresponding position of each plane grid in each scene module plane graph in the scene module, performing 3D (three-dimensional) operation on each plane grid in each scene module plane graph to construct a plurality of scene modules containing paths corresponding to the path plane graph.
In a possible implementation manner, in the instructions executed by the processor 1610, the determining, according to the RGB value corresponding to each planar grid, the height value of the corresponding position of each planar grid in the scene module includes:
setting an initial height value corresponding to the minimum RGB value, and determining a difference value between the RGB value corresponding to each plane grid and the minimum RGB value;
and increasing the height value of the corresponding position of each planar grid in the scene module by a preset height value in response to the difference between the RGB value corresponding to each planar grid and the minimum RGB value reaching a preset multiple of the preset interval RGB value.
In a possible implementation manner, in the instructions executed by the processor 1610, the determining, according to the RGB value corresponding to each planar grid, the height value of the corresponding position of each planar grid in the scene module includes:
setting an initial height value corresponding to the maximum RGB value, and determining a difference value between the RGB value corresponding to each planar grid and the maximum RGB value;
and increasing the height value of the corresponding position of each planar grid in the scene module by a preset height value in response to the difference between the RGB value corresponding to each planar grid and the maximum RGB value reaching a preset multiple of the preset interval RGB value.
In a possible implementation manner, in the instructions executed by the processor 1610, the determining, according to the RGB value corresponding to each planar grid, the height value of the corresponding position of each planar grid in the scene module includes:
determining the central position of the plane graph of the scene module, and determining the corresponding position of a plane grid, which is more than a preset distance from the central position, in the scene module as a boundary;
determining the corresponding position of the plane grid in the path plan in the scene module as a path;
setting the RGB value of the plane grid corresponding to the first area containing the path in the boundary as a first RGB value, and setting the RGB value of the plane grid corresponding to the second area not containing the path in the boundary as a second RGB value;
And determining a first height value corresponding to the first region according to a first RGB value of the plane grid corresponding to the first region, and determining a second height value corresponding to the second region according to a second RGB value of the plane grid corresponding to the second region.
In a possible implementation manner, in the instructions executed by the processor 1610, the performing a 3D operation on each planar grid in each scene module plan view according to the height value of the corresponding position of each planar grid in the scene module, to construct a plurality of scene modules containing paths corresponding to the path plan view, includes:
according to the height value of the corresponding position of each plane grid in each scene module plan in the scene module, placing a white box model block in unit volume at the corresponding position of each plane grid in each scene module plan in sequence to construct and obtain a white box scene module;
binding the pre-acquired virtual assets at preset positions in the white-box scene module to construct the scene module.
In a possible implementation manner, in the instructions executed by the processor 1610, the performing a 3D operation on the scene module plan according to the scene information in the scene module plan to construct a plurality of scene modules corresponding to a plurality of scene module plans includes:
Reading a first height parameter in the path plan view, which characterizes a height value of a corresponding position of the path region in the scene module, and reading a second height parameter in the non-path plan view, which characterizes a height value of a corresponding position of the non-path region in the scene module;
and executing 3D operation on the path plane graph according to the first height parameter, and executing 3D operation on the non-path plane graph according to the second height parameter so as to construct a plurality of scene modules corresponding to a plurality of scene module plane graphs.
In a possible implementation manner, after the constructing, in the instructions executed by the processor 1610, the scene modules including the paths corresponding to the path plan further include:
determining whether an included angle between a plane where the path is located and the bottom surface of the scene module exceeds a preset angle;
and in response to the included angle between the plane where the path is located and the bottom surface of the scene module exceeding the preset angle, adjusting the height value corresponding to the path so that the included angle between the path and the bottom surface of the scene module is smaller than or equal to the preset angle, and redetermining the RGB value of the planar grid corresponding to the path in the scene module plan according to the adjusted height value corresponding to the path.
In a possible implementation manner, in the instructions executed by the processor 1610, the determining path information corresponding to the path in each scene module includes:
for any of the plurality of scene modules,
determining intersections used for communicating with paths in other scene modules according to the extending directions of the paths in the scene modules, and determining intersection position information, intersection quantity information, intersection size and intersection opening directions corresponding to the intersections;
and determining path information corresponding to the path in each scene module according to the intersection position information, the intersection quantity information, the intersection size and the intersection opening direction corresponding to the intersection in each scene module.
In a possible implementation manner, in the instructions executed by the processor 1610, the splicing the scene modules corresponding to the path information according to the scene demand parameters to construct a target virtual scene includes:
determining the scene module demand quantity according to the scene demand parameters;
for any of the plurality of scene modules,
splicing other scene modules which are the same as the intersection quantity information and the intersection size on any side of the scene module and correspond to the intersection position information and the intersection opening direction of the scene module with the scene module to obtain paths which are in a preset angle interval and are communicated with each other, wherein the included angle of planes of two paths of a splicing position between the scene modules is in the preset angle interval;
And splicing the scene modules to communicate all paths in the scene modules, wherein the number of the scene modules subjected to splicing reaches the required number of the scene modules so as to construct and obtain the target virtual scene.
In a possible implementation manner, in the instructions executed by the processor 1610, the splicing, with the scene module, of other scene modules whose sides carry the same intersection quantity information and intersection size as a side of the scene module and whose intersection position information and intersection opening direction correspond to those of the scene module, to obtain mutually communicating paths between the scene modules, includes:
determining whether, among the other scene modules whose intersection quantity information on each side exceeds the preset number, there exist candidate scene modules whose first side has the same intersection quantity information and intersection size as the second side of the scene module;

in response to such candidate scene modules existing, determining whether among them there is a target scene module in which the intersection opening direction of the first side is opposite to that of the second side of the scene module and the intersection position information of the first side overlaps the intersection positions on the second side;

and in response to such a target scene module existing, splicing the second side of the scene module and the first side of the target scene module to each other to obtain mutually communicating paths between the scene modules.
In a possible implementation manner, in the instructions executed by the processor 1610, after the determining whether candidate scene modules exist whose first side has the same intersection quantity information and intersection size as the second side of the scene module, the method further includes:

in response to the intersection quantity information of both the first side and the second side being zero, splicing the second side of the scene module with the first side of any one of the candidate scene modules.
In a possible implementation manner, in the instructions executed by the processor 1610, after the determining whether a target scene module exists in which the intersection opening direction of the first side is opposite to that of the second side and the intersection position information of the first side overlaps the intersection positions on the second side, the method further includes:

in response to no such target scene module existing, determining whether among the candidate scene modules there is a target scene module in which, after a transformation operation, the intersection opening direction of the first side is opposite to that of the second side of the scene module and the intersection position information of the first side overlaps the intersection positions on the second side; wherein the transformation operation at least comprises: a rotation about a central axis that passes through the center point of the scene module and is perpendicular to its bottom surface;

and in response to such a target scene module existing after the transformation operation, splicing the second side of the scene module and the first side of the target scene module to each other to obtain mutually communicating paths between the scene modules.
In a possible implementation manner, after the scene modules corresponding to the path information are spliced according to the scene requirement parameters to construct a target virtual scene, the instructions executed by the processor 1610 further include:
generating path finding tracks distributed along the paths according to the paths in the target virtual scene;
determining whether at least two mutually non-communicated path-finding tracks exist;
and in response to the existence of at least two non-communicated path-finding tracks, replacing a scene module corresponding to the path-finding track with shorter length until at least two non-communicated path-finding tracks do not exist in the target virtual scene.
In a possible implementation manner, in the instructions executed by the processor 1610, after no two mutually non-communicating path-finding tracks remain in the target virtual scene, the method further includes:
a plurality of collision bodies are arranged on the path-finding track; the collision body is used for bearing a movable virtual object so that the movable virtual object can move along any direction on the path-finding track;
creating a movable test virtual role in the target virtual scene, and configuring a test task to the test virtual role so that the test virtual role moves along the extending direction of the path-finding track according to the test task;
And responding to the test virtual character to traverse the path-finding track, and creating the movable virtual object at the corresponding position of the collision body according to the pre-acquired virtual asset.
In a possible implementation manner, in the instructions executed by the processor 1610, the creating the movable virtual object at the position corresponding to the collision body according to the virtual asset acquired in advance includes:
determining the area of a path corresponding to the track in the scene module;
and determining the distribution density of the movable virtual object created at the corresponding position of the collision body in each scene module according to the area of the path corresponding to the track in the scene module.
Based on the same inventive concept, corresponding to any of the above embodiments of the method, the present application further provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the virtual scene construction method according to any of the above embodiments.
The computer readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device.
The storage medium of the foregoing embodiments stores computer instructions for causing the computer to execute the virtual scene construction method according to any one of the foregoing embodiments, and has the advantages of the corresponding method embodiments, which are not described herein.
Based on the same inventive concept, corresponding to the virtual scene construction method described in any of the above embodiments, the present disclosure further provides a computer program product, which includes computer program instructions. In some embodiments, the computer program instructions may be executed by one or more processors of a computer to cause the computer and/or the processors to perform the virtual scene construction method. In correspondence with the execution subject of each step in the embodiments of the virtual scene construction method, the processor executing a given step may belong to the corresponding execution subject.
The computer program product of the foregoing embodiments is configured to enable the computer and/or the processor to perform the virtual scene building method according to any one of the foregoing embodiments, and has the beneficial effects of corresponding method embodiments, which are not described herein again.
It can be appreciated that before using the technical solutions of the embodiments in the present application, the user is informed about the type, the use range, the use scenario, etc. of the related personal information in an appropriate manner, and the authorization of the user is obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly indicate that the requested operation will require obtaining and using the user's personal information. The user can thus, according to the prompt information, autonomously choose whether to provide personal information to the software or hardware, such as an electronic device, application program, server or storage medium, that performs the operations of the technical solution.

As an alternative but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent, for example, by way of a popup window, in which the prompt information may be presented as text. In addition, the popup window may carry a selection control by which the user chooses to 'agree' or 'disagree' to provide personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization acquisition process is merely illustrative, and not limiting of the implementation of the present application, and that other ways of satisfying relevant legal regulations may be applied to the implementation of the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be implemented as a system, method, or computer program product. Thus, the present application may be embodied in the form of: all hardware, all software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software, is generally referred to herein as a "circuit," module, "or" system. Furthermore, in some embodiments, the present application may also be embodied in the form of a computer program product in one or more computer-readable media, which contain computer-readable program code.
Any combination of one or more computer readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in one or more programming languages or combinations thereof, including object oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer, for example, through the internet using an internet service provider.
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, in accordance with embodiments of the present application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Those of ordinary skill in the art will appreciate that the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the application (including the claims) is limited to these examples. Within the concept of the present application, the technical features of the above embodiments, or of different embodiments, may be combined, the steps may be implemented in any order, and many other variations of the different aspects of the embodiments exist as described above; for the sake of brevity, these are not provided in detail.
Additionally, well-known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown in the provided figures, in order to simplify the illustration and discussion and so as not to obscure the embodiments of the present application. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present application; this also takes into account the fact that specifics with respect to the implementation of such block diagram devices are highly dependent upon the platform on which the embodiments are to be implemented (i.e., such specifics should be well within the purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that the embodiments can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative rather than restrictive.
While the present application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.
The present embodiments are intended to embrace all such alternatives, modifications and variations as fall within the broad scope of the appended claims. Accordingly, any omissions, modifications, equivalents, improvements and the like made within the spirit and principles of the embodiments are intended to be included within the scope of the present application.

Claims (21)

1. A virtual scene construction method, the method comprising:
splicing a plurality of scene module plan views according to a pre-acquired unit pixel module;
performing a 3D conversion operation on the scene module plan views according to scene information in the scene module plan views, so as to construct a plurality of scene modules corresponding to the plurality of scene module plan views, wherein each scene module comprises at least a path;
determining path information corresponding to the path in each scene module; and
acquiring scene demand parameters, and splicing the scene modules corresponding to the path information according to the scene demand parameters, so as to construct a target virtual scene.
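For illustration only (this is not part of the claim language): the objects manipulated by claim 1 can be pictured as a small data structure holding a plan view, the heights derived from it, and the per-side path information used for splicing. The Python sketch below is one hypothetical reading; every field name is an assumption.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    # Hypothetical container for one scene module, as read from claim 1.
    @dataclass
    class SceneModule:
        # Plan view: one RGB triple per plane grid, row-major.
        plan_rgb: List[List[Tuple[int, int, int]]]
        # Height value per plane grid, derived from plan_rgb (claims 4 to 9).
        heights: List[List[int]] = field(default_factory=list)
        # Path information: intersections ("openings") recorded per module
        # side (claim 11), consumed when modules are spliced (claims 12-15).
        openings: Dict[str, dict] = field(default_factory=dict)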
2. The method of claim 1, wherein the unit pixel module comprises a first type pixel and a second type pixel;
and wherein splicing the plurality of scene module plan views according to the pre-acquired unit pixel module comprises:
determining path areas, path positions, non-path areas and non-path positions in the scene module plan views according to pre-acquired splicing requirement information;
splicing the first type pixels at the path positions in a plurality of initial plan views of preset size, to obtain path regions corresponding to the path areas;
splicing the second type pixels at the non-path positions in the plurality of initial plan views, to obtain non-path regions corresponding to the non-path areas; and
determining the plurality of scene module plan views according to the path regions and the non-path regions.
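For illustration only, a minimal Python sketch of the pixel-splicing step of claim 2, assuming white "first type" pixels mark path positions and black "second type" pixels mark everything else; all values and sizes are hypothetical.

    FIRST_TYPE = (255, 255, 255)   # assumed path pixel (white)
    SECOND_TYPE = (0, 0, 0)        # assumed non-path pixel (black)

    def make_plan_view(size, path_positions):
        """Build a size x size plan view as a grid of RGB triples."""
        grid = [[SECOND_TYPE] * size for _ in range(size)]
        for row, col in path_positions:
            grid[row][col] = FIRST_TYPE
        return grid

    # A 4 x 4 plan view whose path crosses the module along the middle row.
    plan = make_plan_view(4, [(1, 0), (1, 1), (1, 2), (1, 3)])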
3. The method of claim 2, wherein determining the plurality of scene module plan views according to the path regions and the non-path regions comprises:
applying a pre-acquired first preset texture map to the path regions, to obtain a path plan view;
applying a pre-acquired second preset texture map to the non-path regions, to obtain a non-path plan view; and
determining the plurality of scene module plan views according to the path plan view and the non-path plan view.
4. The method of claim 3, wherein performing the 3D conversion operation on the scene module plan views according to the scene information in the scene module plan views, to construct the plurality of scene modules corresponding to the plurality of scene module plan views, comprises:
equally dividing the path plan view and the non-path plan view in each scene module plan view into a plurality of plane grids, and determining the scene information corresponding to each plane grid, wherein the scene information comprises at least an RGB value corresponding to the plane grid;
determining a height value of the position corresponding to each plane grid in the scene module according to the RGB value corresponding to each plane grid; and
performing the 3D conversion operation on each plane grid in each scene module plan view according to the height values, to construct a plurality of scene modules containing paths corresponding to the path plan view.
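For illustration only, a sketch of the grid-division step of claim 4 under the assumption that the scene information kept per plane grid is the mean pixel value of the block it covers (a single channel is shown for brevity); names and sizes are hypothetical.

    def divide_into_plane_grids(pixels, grid_size):
        """Average each grid_size x grid_size block of a 2D pixel array."""
        rows, cols = len(pixels), len(pixels[0])
        grids = []
        for r0 in range(0, rows, grid_size):
            grid_row = []
            for c0 in range(0, cols, grid_size):
                block = [pixels[r][c]
                         for r in range(r0, min(r0 + grid_size, rows))
                         for c in range(c0, min(c0 + grid_size, cols))]
                grid_row.append(sum(block) // len(block))
            grids.append(grid_row)
        return grids

    # A 4 x 4 image reduced to 2 x 2 plane grids.
    info = divide_into_plane_grids([[0, 0, 64, 64],
                                    [0, 0, 64, 64],
                                    [128, 128, 255, 255],
                                    [128, 128, 255, 255]], 2)
    # info == [[0, 64], [128, 255]]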
5. The method of claim 4, wherein determining the height value of the position corresponding to each plane grid in the scene module according to the RGB value corresponding to each plane grid comprises:
setting an initial height value corresponding to a minimum RGB value, and determining the difference between the RGB value corresponding to each plane grid and the minimum RGB value; and
in response to the difference between the RGB value corresponding to a plane grid and the minimum RGB value reaching a preset multiple of a preset interval RGB value, increasing the height value of the position corresponding to that plane grid in the scene module by a preset height value.
6. The method of claim 4, wherein determining the height value of the position corresponding to each plane grid in the scene module according to the RGB value corresponding to each plane grid comprises:
setting an initial height value corresponding to a maximum RGB value, and determining the difference between the RGB value corresponding to each plane grid and the maximum RGB value; and
in response to the difference between the RGB value corresponding to a plane grid and the maximum RGB value reaching a preset multiple of a preset interval RGB value, increasing the height value of the position corresponding to that plane grid in the scene module by a preset height value.
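For illustration only, one hedged reading of claims 5 and 6: the height of a plane grid grows by one preset step for every preset RGB interval separating its value from the minimum (claim 5) or maximum (claim 6) reference value. The step and interval sizes below are assumptions.

    PRESET_HEIGHT_STEP = 0.5     # assumed height added per full interval
    PRESET_RGB_INTERVAL = 16     # assumed width of one RGB interval

    def grid_height(rgb_value, reference_rgb, initial_height=0.0):
        """Height rises by one preset step per preset RGB interval of
        distance between the grid's value and the reference value."""
        intervals = abs(rgb_value - reference_rgb) // PRESET_RGB_INTERVAL
        return initial_height + intervals * PRESET_HEIGHT_STEP

    values = [0, 16, 47, 255]
    from_min = [grid_height(v, min(values)) for v in values]  # claim-5 style
    from_max = [grid_height(v, max(values)) for v in values]  # claim-6 style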
7. The method of claim 4, wherein determining the height value of the position corresponding to each plane grid in the scene module according to the RGB value corresponding to each plane grid comprises:
determining a central position of the scene module plan view, and determining positions in the scene module corresponding to plane grids farther than a preset distance from the central position as a boundary;
determining positions in the scene module corresponding to plane grids in the path plan view as a path;
setting the RGB value of the plane grids corresponding to a first area of the boundary that contains the path to a first RGB value, and setting the RGB value of the plane grids corresponding to a second area of the boundary that does not contain the path to a second RGB value; and
determining a first height value corresponding to the first area according to the first RGB value of the plane grids corresponding to the first area, and a second height value corresponding to the second area according to the second RGB value of the plane grids corresponding to the second area.
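For illustration only, a sketch of the boundary rule of claim 7 under assumed constants: grids farther than a preset distance from the module centre form the boundary, and boundary grids on the path receive a different RGB value (hence a different height) from the rest of the boundary. All names and values are hypothetical.

    PRESET_DISTANCE = 3.0             # assumed radius of the interior
    FIRST_RGB, SECOND_RGB = 128, 255  # assumed values for the two boundary areas

    def boundary_rgb(row, col, centre, path_cells):
        """Return the boundary RGB for a grid, or None if it is interior."""
        dist = ((row - centre[0]) ** 2 + (col - centre[1]) ** 2) ** 0.5
        if dist <= PRESET_DISTANCE:
            return None  # within the preset distance: not part of the boundary
        return FIRST_RGB if (row, col) in path_cells else SECOND_RGB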
8. The method of claim 4, wherein performing the 3D conversion operation on each plane grid in each scene module plan view according to the height values, to construct the plurality of scene modules containing paths corresponding to the path plan view, comprises:
sequentially placing unit-volume white-box model blocks at the positions corresponding to the plane grids in each scene module plan view, according to the height value of the position corresponding to each plane grid in the scene module, to construct a white-box scene module; and
binding pre-acquired virtual assets at preset positions in the white-box scene module, to construct the scene module.
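For illustration only, a minimal sketch of the white-box step of claim 8: one unit-volume block is stacked per height unit at each plane grid, producing the block layout to which virtual assets would later be bound. All names are hypothetical.

    def build_white_box(heights):
        """heights: 2D list of integer heights; returns the (x, y, z)
        positions of the unit-volume white-box blocks stacked per grid."""
        blocks = []
        for y, row in enumerate(heights):
            for x, h in enumerate(row):
                for z in range(h):
                    blocks.append((x, y, z))
        return blocks

    module_blocks = build_white_box([[1, 1], [1, 3]])  # 6 unit blocks in total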
9. The method of claim 3, wherein performing the 3D conversion operation on the scene module plan views according to the scene information in the scene module plan views, to construct the plurality of scene modules corresponding to the plurality of scene module plan views, comprises:
reading, from the path plan view, a first height parameter characterizing the height value of the position corresponding to the path region in the scene module, and reading, from the non-path plan view, a second height parameter characterizing the height value of the position corresponding to the non-path region in the scene module; and
performing the 3D conversion operation on the path plan view according to the first height parameter, and on the non-path plan view according to the second height parameter, to construct the plurality of scene modules corresponding to the plurality of scene module plan views.
10. The method of claim 1, wherein after constructing the plurality of scene modules containing paths corresponding to the path plan view, the method further comprises:
determining whether the included angle between the plane in which a path lies and the bottom surface of the scene module exceeds a preset angle; and
in response to the included angle exceeding the preset angle, adjusting the height value corresponding to the path so that the included angle becomes smaller than or equal to the preset angle, and re-determining the RGB values of the plane grids corresponding to the path in the scene module plan view according to the adjusted height value.
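For illustration only, one way to picture the slope check of claim 10, assuming a 30-degree preset angle and unit-length grids: when the rise between two adjacent path grids is too steep, the far end is pulled back to the steepest allowed height (the plan-view RGB would then be re-derived, which is not shown).

    import math

    PRESET_ANGLE_DEG = 30.0  # assumed maximum allowed path slope
    GRID_LENGTH = 1.0        # assumed horizontal extent of one plane grid

    def clamp_path_height(h_start, h_end):
        """Clamp the far end of a path segment to the preset slope."""
        angle = math.degrees(math.atan2(abs(h_end - h_start), GRID_LENGTH))
        if angle <= PRESET_ANGLE_DEG:
            return h_end
        max_rise = GRID_LENGTH * math.tan(math.radians(PRESET_ANGLE_DEG))
        return h_start + math.copysign(max_rise, h_end - h_start)

    print(clamp_path_height(0.0, 2.0))  # ~0.577: clamped to a 30-degree rise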
11. The method of claim 1, wherein determining the path information corresponding to the path in each scene module comprises:
for any one of the plurality of scene modules:
determining, according to the extension direction of the path in the scene module, intersections used for communicating with paths in other scene modules, and determining intersection position information, intersection quantity information, intersection size and intersection opening direction corresponding to the intersections; and
determining the path information corresponding to the path in each scene module according to the intersection position information, intersection quantity information, intersection size and intersection opening direction of the intersections in that scene module.
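For illustration only, a hypothetical sketch of the path-information step of claim 11 on a square grid module: openings are the path grids that reach the module edge, recorded per side with their count and positions (opening size and direction are omitted for brevity).

    from collections import defaultdict

    def extract_path_info(path_cells, size):
        """path_cells: set of (row, col) path grids of a size x size module."""
        openings = defaultdict(list)
        for r, c in path_cells:
            if r == 0:
                openings["north"].append(c)
            if r == size - 1:
                openings["south"].append(c)
            if c == 0:
                openings["west"].append(r)
            if c == size - 1:
                openings["east"].append(r)
        return {side: {"count": len(cells), "positions": sorted(cells)}
                for side, cells in openings.items()}

    # A straight east-west path yields one opening on each of those sides.
    info = extract_path_info({(1, 0), (1, 1), (1, 2), (1, 3)}, 4)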
12. The method of claim 11, wherein splicing the scene modules corresponding to the intersection position information according to the scene demand parameters to construct the target virtual scene comprises:
determining a required number of scene modules according to the scene demand parameters;
for any one of the plurality of scene modules:
splicing, onto any side of the scene module, other scene modules whose intersection quantity information and intersection size are the same as those of the scene module, and whose intersection position information and intersection opening direction correspond to those of the scene module, to obtain mutually communicating paths between the scene modules, wherein the included angle between the planes of the two paths at a splicing position lies within a preset angle interval; and
continuing to splice scene modules so that all paths in the spliced scene modules communicate, until the number of spliced scene modules reaches the required number, so as to construct the target virtual scene.
13. The method of claim 11, wherein splicing the other scene modules whose intersection quantity information and intersection size are the same as those of the scene module, and whose intersection position information and intersection opening direction correspond to those of the scene module, to obtain mutually communicating paths between the scene modules, comprises:
determining whether, among the other scene modules whose intersection quantity information on each side is greater than a preset number, there are candidate scene modules whose intersection quantity information and intersection size on a first side are the same as those of a second side of the scene module;
in response to such candidate scene modules existing, determining whether among them there is a target scene module whose intersection opening direction on the first side is opposite to the intersection opening direction of the second side of the scene module, and whose intersection position information on the first side overlaps the intersection positions of the second side of the scene module; and
in response to such a target scene module existing, splicing the second side of the scene module and the first side of the target scene module to each other, to obtain mutually communicating paths between the scene modules.
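For illustration only, the matching test of claim 13 can be pictured with the per-side records from the claim-11 sketch above: a candidate fits the scene module's second side when its opposite-facing first side has the same opening count and coinciding opening positions. This is a hypothetical reading, not the claimed procedure itself.

    OPPOSITE = {"north": "south", "south": "north", "east": "west", "west": "east"}

    def matches(current_info, candidate_info, second_side):
        """True when the candidate's opposite side mirrors the second side."""
        first_side = OPPOSITE[second_side]
        a = current_info.get(second_side)
        b = candidate_info.get(first_side)
        if a is None or b is None:
            return False
        return a["count"] == b["count"] and a["positions"] == b["positions"]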
14. The method of claim 13, wherein after determining whether there are candidate scene modules whose intersection quantity information and intersection size on the first side are the same as those of the second side of the scene module, the method further comprises:
in response to the intersection quantity information of the first side and of the second side both being zero, splicing the second side of the scene module with the first side of any one of the candidate scene modules.
15. The method of claim 13, wherein after determining whether there is a target scene module among the candidate scene modules whose intersection opening direction on the first side is opposite to the intersection opening direction of the second side of the scene module, and whose intersection position information on the first side overlaps the intersection positions of the second side, the method further comprises:
determining whether, after a transformation operation, there is a target scene module among the candidate scene modules whose intersection opening direction on the first side is opposite to the intersection opening direction of the second side of the scene module, and/or whose intersection position information on the first side overlaps the intersection positions of the second side, wherein the transformation operation comprises at least a rotation about a central axis that passes through the central point of the scene module and is perpendicular to the bottom surface of the scene module; and
in response to such a target scene module existing after the transformation operation, splicing the second side of the scene module and the first side of the target scene module to each other, to obtain mutually communicating paths between the scene modules.
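For illustration only, the transformation operation of claim 15 can be sketched as a quarter-turn permutation of the per-side opening records before re-testing the match; the re-mapping of position indices along each rotated side is omitted for brevity, and all names are hypothetical.

    ROTATE_90 = {"north": "east", "east": "south", "south": "west", "west": "north"}

    def rotate_openings(info, quarter_turns):
        """Rotate a module's per-side opening record about its vertical axis."""
        for _ in range(quarter_turns % 4):
            info = {ROTATE_90[side]: data for side, data in info.items()}
        return info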
16. The method of claim 1, wherein after splicing the scene modules corresponding to the path information according to the scene demand parameters to construct the target virtual scene, the method further comprises:
generating path-finding tracks distributed along the paths according to the paths in the target virtual scene;
determining whether at least two mutually non-communicating path-finding tracks exist; and
in response to at least two mutually non-communicating path-finding tracks existing, replacing the scene module corresponding to the shorter path-finding track, until no two mutually non-communicating path-finding tracks remain in the target virtual scene.
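For illustration only, the connectivity check of claim 16 amounts to asking whether all path-finding tracks fall into a single connected component. A minimal union-find sketch, with hypothetical inputs:

    def all_tracks_communicate(track_ids, contacts):
        """contacts: (a, b) pairs of tracks that touch each other."""
        parent = {t: t for t in track_ids}
        def find(t):
            while parent[t] != t:
                parent[t] = parent[parent[t]]  # path-halving compression
                t = parent[t]
            return t
        for a, b in contacts:
            parent[find(a)] = find(b)
        return len({find(t) for t in track_ids}) <= 1

    print(all_tracks_communicate([1, 2, 3], [(1, 2)]))  # False: track 3 isolated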
17. The method of claim 16, wherein after no two mutually non-communicating path-finding tracks remain in the target virtual scene, the method further comprises:
arranging a plurality of collision bodies on the path-finding tracks, the collision bodies being used to bear a movable virtual object so that the movable virtual object can move in any direction on the path-finding track;
creating a movable test virtual character in the target virtual scene, and assigning a test task to the test virtual character so that the test virtual character moves along the extension direction of the path-finding tracks according to the test task; and
in response to the test virtual character traversing the path-finding tracks, creating the movable virtual object at the positions corresponding to the collision bodies according to pre-acquired virtual assets.
18. The method of claim 17, wherein creating the movable virtual object at the positions corresponding to the collision bodies according to the pre-acquired virtual assets comprises:
determining the area of the path corresponding to the track in each scene module; and
determining, according to the area of the path corresponding to the track in each scene module, the distribution density of the movable virtual objects created at the positions corresponding to the collision bodies in that scene module.
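For illustration only, one hedged reading of claim 18: the number of movable virtual objects scales with the area of the path that the track covers, capped at one object per collision body. Both the scaling rule and the tuning constant below are assumptions.

    OBJECTS_PER_UNIT_AREA = 0.25  # assumed objects per unit of path area

    def spawn_count(path_area, collision_body_count):
        """Scale object count with path area; one object per body at most."""
        wanted = round(OBJECTS_PER_UNIT_AREA * path_area)
        return min(wanted, collision_body_count)

    print(spawn_count(40.0, 8))  # 8: a large path area saturates the bodies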
19. A virtual scene construction apparatus, the apparatus comprising:
a splicing module configured to splice a plurality of scene module plan views according to a pre-acquired unit pixel module;
a 3D conversion module configured to perform a 3D conversion operation on the scene module plan views according to scene information in the scene module plan views, so as to construct a plurality of scene modules corresponding to the plurality of scene module plan views, wherein each scene module comprises at least a path;
a determining module configured to determine path information corresponding to the path in each scene module; and
a construction module configured to acquire scene demand parameters and splice the scene modules corresponding to the path information according to the scene demand parameters, so as to construct a target virtual scene.
20. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 18 when executing the program.
21. A computer-readable storage medium storing computer instructions, the computer instructions being used for causing a computer to implement the method of any one of claims 1 to 18.
CN202311437770.8A 2023-10-31 2023-10-31 Virtual scene construction method and device, electronic equipment and storage medium Pending CN117398685A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311437770.8A CN117398685A (en) 2023-10-31 2023-10-31 Virtual scene construction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311437770.8A CN117398685A (en) 2023-10-31 2023-10-31 Virtual scene construction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117398685A true CN117398685A (en) 2024-01-16

Family

ID=89494145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311437770.8A Pending CN117398685A (en) 2023-10-31 2023-10-31 Virtual scene construction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117398685A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117953175A (en) * 2024-03-26 2024-04-30 湖南速子文化科技有限公司 Method, system, equipment and medium for constructing virtual world data model

Similar Documents

Publication Publication Date Title
CN107545788B (en) Goods electronic sand map system is deduced based on the operation that augmented reality is shown
CN107358649B (en) Processing method and device of terrain file
CN112862935B (en) Game role movement processing method and device, storage medium and computer equipment
CN117398685A (en) Virtual scene construction method and device, electronic equipment and storage medium
CN112717404B (en) Virtual object movement processing method and device, electronic equipment and storage medium
CN108465241A (en) Processing method, device, storage medium and the electronic equipment of game sound reverberation
CN105741340B (en) A kind of transmission line of electricity three-dimensional scenic emulation mode and system for web page display
CN108959434A (en) A kind of scene fusion visualization method under more geographical information platforms
Virtanen et al. Browser based 3D for the built environment
Empler Cultural heritage: Displaying the Forum of Nerva with new technologies
CN115120980A (en) Game scene generation method and device, storage medium and electronic device
CN108876920B (en) Geometric structure data processing method and device for three-dimensional assembly splicing
Ehtemami et al. Overview of Visualizing Historical Architectural Knowledge through Virtual Reality
Szwoch et al. 3D optical reconstruction of building interiors for game development
Ehtemami et al. Review of Visualizing Historical Architectural Knowledge through Virtual Reality
Xing et al. MR environments constructed for a large indoor physical space
Banfi et al. Virtual access to heritage through scientific drawing, semantic models and VR-experience of the Stronghold of Arquata del Tronto after the earthquake
CN116899216B (en) Processing method and device for special effect fusion in virtual scene
CN113181642B (en) Method and device for generating wall model with mixed materials
Tauscher et al. Map Generation for Retro Gaming Engines
Tully Contributions to Big Geospatial Data Rendering and Visualisations
Christen 3D Computer Graphics with Python
Diehl et al. Applications of serious games in Geovisualization
Bender et al. HW3D: A tool for interactive real-time 3D visualization in GIS supported flood modelling
CN117839206A (en) Data processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination