WO2022166681A1 - Virtual scene generation method, apparatus, device and storage medium - Google Patents

Virtual scene generation method, apparatus, device and storage medium

Info

Publication number
WO2022166681A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
target
division
line
objects
Prior art date
Application number
PCT/CN2022/073766
Other languages
English (en)
French (fr)
Inventor
蔡家闰
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2022166681A1 publication Critical patent/WO2022166681A1/zh
Priority to US17/985,639 priority Critical patent/US20230071213A1/en

Classifications

    • GPHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
            • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
          • G06T19/00 Manipulating 3D models or images for computer graphics
            • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
          • G06T2210/00 Indexing scheme for image generation or computer graphics
            • G06T2210/61 Scene description
          • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
            • G06T2219/20 Indexing scheme for editing of 3D models
              • G06T2219/2021 Shape modification
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
            • G06F16/20 Information retrieval of structured data, e.g. relational data
              • G06F16/26 Visual data mining; Browsing structured data
              • G06F16/29 Geographical information databases

Definitions

  • the present application relates to the field of computer technologies, and in particular, to a method, apparatus, device and storage medium for generating a virtual scene.
  • simulated or created virtual communities, virtual villages, virtual cities, etc. can be generated; in another example, simulated or created virtual systems, virtual galaxies, virtual universes, etc. can be generated; and so on.
  • the generated virtual scene can be used for various purposes, such as being used for film and television special effects, game scene construction, and so on.
  • Embodiments of the present application provide a method, apparatus, device, and storage medium for generating a virtual scene, which can improve the generation efficiency of a virtual scene.
  • An embodiment of the present application provides a method for generating a virtual scene, executed by an electronic device, including:
  • acquiring scene feature information corresponding to a target virtual scene to be generated;
  • generating a scene division network in an initial virtual scene based on the scene feature information, wherein the scene division network includes at least one piece of division mark data, and the division mark data is used to divide the initial virtual scene;
  • generating a set of scene objects to be added to the scene division network, wherein the set of scene objects includes at least one scene object;
  • performing attribute matching on the scene objects and the division mark data to obtain candidate scene objects allocated to the division mark data;
  • screening out target scene objects from the candidate scene objects according to positional association information between the candidate scene objects and the division mark data; and
  • matching the target scene objects with the division mark data to generate the target virtual scene.
  • the embodiment of the present application also provides a virtual scene generation device, including:
  • an information acquisition unit configured to acquire scene feature information corresponding to the target virtual scene to be generated
  • a network generation unit configured to generate a scene division network in an initial virtual scene based on the scene feature information, wherein the scene division network includes at least one division mark data, and the division mark data is used to divide the initial virtual scene;
  • a set generating unit configured to generate a set of scene objects to be added to the scene division network, wherein the set of scene objects includes at least one scene object;
  • an attribute matching unit configured to perform attribute matching on the scene object and the division mark data to obtain candidate scene objects allocated to the division mark data
  • a target screening unit configured to screen out target scene objects from the candidate scene objects according to the positional association information between the candidate scene objects and the division mark data
  • a target allocation unit configured to match the target scene object with the division mark data to generate a target virtual scene.
  • the embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by the processor, the steps of the virtual scene generation method shown in the embodiments of the present application are implemented. .
  • an embodiment of the present application further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the virtual scene generation method according to the embodiments of the present application.
  • the embodiments of the present application further provide a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the above-mentioned virtual scene generation method.
  • FIG. 1 is a schematic diagram of a scene of a method for generating a virtual scene provided by an embodiment of the present application
  • FIG. 2 is a flowchart of a method for generating a virtual scene provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of an application of a line constraint rule provided by an embodiment of the present application.
  • FIG. 4 is another schematic diagram of an application of a line constraint rule provided by an embodiment of the present application.
  • FIG. 5 is another schematic diagram of an application of a line constraint rule provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of each building sub-module provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a combined building provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an interface for setting building attributes through Houdini provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an urban road network provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a road network after attribute matching provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a road network provided by an embodiment of the present application after screening out candidate buildings that fail to be detected;
  • FIG. 12 is a schematic diagram of a road network after screening target buildings according to object density constraint parameters provided by an embodiment of the present application;
  • FIG. 13 is a schematic diagram of a road network provided by an embodiment of the present application after 50% of the target buildings in each category in FIG. 12 are screened;
  • FIG. 14 is a schematic diagram of a road network provided by an embodiment of the present application after screening target buildings according to priorities;
  • FIG. 15 is another schematic flowchart of a virtual scene generation method provided by an embodiment of the present application.
  • FIG. 16 is a schematic flowchart of generating urban roads in Houdini software provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of a node network connected and used in Houdini provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of a virtual city provided by an embodiment of the present application.
  • FIG. 19 is a schematic diagram of a virtual city presented in the Unreal Engine game engine provided by an embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of an apparatus for generating a virtual scene provided by an embodiment of the present application.
  • FIG. 21 is another schematic structural diagram of a virtual scene generating apparatus provided by an embodiment of the present application.
  • FIG. 22 is another schematic structural diagram of a virtual scene generating apparatus provided by an embodiment of the present application.
  • FIG. 23 is another schematic structural diagram of a virtual scene generating apparatus provided by an embodiment of the present application.
  • FIG. 24 is another schematic structural diagram of a virtual scene generating apparatus provided by an embodiment of the present application.
  • FIG. 25 is another schematic structural diagram of a virtual scene generating apparatus provided by an embodiment of the present application.
  • FIG. 26 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Embodiments of the present application provide a method, apparatus, device, and storage medium for generating a virtual scene.
  • the embodiments of the present application provide a virtual scene generating apparatus suitable for electronic equipment.
  • the electronic device may be a device such as a terminal or a server
  • the terminal may be a device such as a mobile phone, a tablet computer, a notebook computer, and the like.
  • the server can be a single server or a server cluster consisting of multiple servers.
  • the embodiments of the present application will be introduced by taking the virtual scene generation method being executed by a terminal as an example.
  • an embodiment of the present application provides a virtual scene generation system, including a terminal 10, wherein the terminal 10 can be used to generate a target virtual scene, for example, a simulated or created virtual community, virtual village, or virtual city; for another example, it can be used to generate simulated or designed virtual systems, virtual galaxies, virtual universes, and so on.
  • the terminal 10 may obtain scene feature information corresponding to the target virtual scene to be generated and, based on the scene feature information, generate a scene division network in an initial virtual scene, where the initial virtual scene serves, during the process of generating or constructing the target virtual scene, as a container for carrying division mark data and scene objects, so that generation of the target virtual scene can be assisted by adding division mark data and scene objects to the initial virtual scene.
  • the generated scene division network may include at least one division mark data, and the division mark data may be used to divide the initial virtual scene.
  • the terminal 10 may generate a scene object set to be added to the scene dividing network, where the scene object set includes at least one scene object.
  • the scene objects may be buildings, vegetation, and the like.
  • the terminal 10 may perform attribute matching between the scene objects and the division mark data in the scene division network to obtain candidate scene objects allocated to the division mark data, and screen out target scene objects from the candidate scene objects according to the positional association information between the candidate scene objects and the division mark data, so that the terminal 10 can add the target scene objects to the scene division network by matching the target scene objects with the division mark data, thereby generating the target virtual scene.
  • the virtual scene generation method may be executed jointly by the terminal and the server.
  • the virtual scene generation system includes a terminal 10 and a server 20 , wherein the terminal 10 can send scene feature information corresponding to the target virtual scene to be generated to the server 20 .
  • the server 20 may obtain the scene feature information of the target virtual scene to be generated; generate, based on the scene feature information, a scene division network in an initial virtual scene, wherein the scene division network may include at least one piece of division mark data, and the division mark data may be used to divide the initial virtual scene; generate a set of scene objects to be added to the scene division network, wherein the scene object set may include at least one scene object; perform attribute matching on the scene objects and the division mark data in the scene division network to obtain candidate scene objects allocated to the division mark data; screen out target scene objects from the candidate scene objects according to positional association information between the candidate scene objects and the division mark data; and match the target scene objects with the division mark data to add the target scene objects to the scene division network and generate the target virtual scene.
  • the server 20 can send the generated scene rendering data of the target virtual scene to the terminal 10, so that the terminal 10 can display the relevant pictures of the target virtual scene based on the scene rendering data.
  • artificial intelligence technology has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, autonomous driving, and drones; it is believed that with the development of technology, artificial intelligence technology will be applied in more fields and play an increasingly important role.
  • a virtual scene generation method provided by an embodiment of the present application involves technologies such as computer vision in artificial intelligence; the method is executed by an electronic device, for example, by the terminal 10 or the server 20 in FIG. 1, or jointly by the terminal 10 and the server 20.
  • the embodiments of the present application are described by taking the virtual scene generation method being executed by the terminal as an example.
  • the virtual scene generation method is executed by the virtual scene generation device integrated in the terminal.
  • the specific process of the virtual scene generation method may be as follows:
  • a scene in real life refers to a specific picture of life formed by task actions occurring at a certain time and in a certain space, or by the relationships between characters; relatively speaking, it is a staged, horizontal presentation of the characters' actions and life events through which the content of the plot develops.
  • the target virtual scene to be generated in this application may be a virtual scene to be modeled; specifically, it may be a simulated virtual scene obtained by using computer vision technology to simulate a scene in real life, or a new virtual scene recreated or designed by modeling with computer vision technology.
  • the target virtual scene can be a simulated or created virtual community, a virtual village, a virtual city, etc.; another example can be a simulated or created virtual system, a virtual galaxy, a virtual universe, etc.; and so on.
  • the generated target virtual scene can be used for various purposes, for example, it can be used for film and television special effects, game scene construction, and so on.
  • the method for generating a virtual scene may be introduced by taking the target virtual scene as a virtual city as an example, wherein the target virtual scene to be generated may be a simulated virtual city, a created virtual city, a virtual city generated with the assistance of real-world data, and so on.
  • the scene feature information is used to describe the characteristics of the target virtual scene, and may be related information describing the multi-dimensional scene characteristics of the target virtual scene, such as geographic dimension, population density dimension, functional area dimension, and building height dimension.
  • the scene feature information in the geographic dimension includes altitude distribution, land distribution, water resource distribution, vegetation distribution, etc.
  • the scene feature information can also describe the characteristics of the target virtual scene in a socio-statistical sense, including population density distribution, functional area distribution (taking the target virtual scene as a virtual city as an example, functional areas may include residential areas, commercial areas, mixed areas, etc.), and height distribution (taking the target virtual scene as a virtual city as an example, the height distribution can be the height distribution of buildings in the city), etc.
  • the data format of the scene feature information can be various, for example, it can be various data formats such as data table, image, and audio.
  • acquiring the scene feature information corresponding to the target virtual scene to be generated can be achieved by acquiring the population density distribution map of the virtual city.
  • for example, the scene feature information corresponding to the target virtual scene to be generated can be extracted from a database; for another example, it can be requested from a server or a network; for another example, it can be acquired on the spot through a data acquisition device such as a camera or video camera; for another example, it can be input by the user; and so on.
  • the initial virtual scene can be used as a basic container in the process of generating or constructing the target virtual scene.
  • the initial virtual scene can be used as a container for dividing mark data and scene objects to be generated, so that the target virtual scene can be generated by adding the dividing mark data and scene objects in the initial virtual scene.
  • the initial virtual scene may be a coordinate system, such as a three-dimensional or two-dimensional coordinate system, and the coordinate system may be a blank coordinate system or a non-empty coordinate system that already contains scene objects.
  • the target virtual scene may be constructed in graphics software, and the initial virtual scene may be the initialization state of the graphics software when the target virtual scene is generated in the graphics software.
  • the target virtual scene can be constructed in the 3D computer graphics software Houdini.
  • the initial virtual scene can be the 3D coordinate system in Houdini's interactive interface.
  • the 3D coordinate system can be a blank 3D coordinate system, that is, the target virtual scene is created in Houdini from scratch.
  • the three-dimensional coordinate system may also be a non-empty three-dimensional coordinate system, that is, the target virtual scene is continuously constructed in the existing virtual scene through Houdini.
  • the division mark data is the data that serves as the division mark in the initial virtual scene, including the visual division line and the non-visual mark data.
  • the visual division line can be a line segment, a straight line, a dotted line, a curve, etc.
  • in this case, the division mark data can be presented as part of the finally generated target virtual scene; for another example, the non-visual mark data can be coordinate data, length data, etc.
  • the division mark data may not be presented as part of the final generated target virtual scene, but only as data required to assist in generating the target virtual scene.
  • the visualized dividing lines may be roads in the virtual city, and the virtual city may be divided into different areas through the roads.
  • in this case, the division mark data may be presented as part of the finally generated virtual city.
  • the non-visualized marked data may be marked data required to divide the virtual universe into different parts, such as different galaxies or different spatial regions.
  • the division mark data may not be presented as part of the virtual universe, but only serve as mark data needed to assist in generating the virtual universe.
  • the scene division network is a network composed of division mark data.
  • the scene division network may further include nodes formed by the intersection or junction of the division mark data, or nodes that exist independently, which is not limited in this embodiment of the present application.
  • a scene dividing network can be generated in the initial virtual scene based on the scene feature information.
  • the scene division network can be generated by generating a basic division network in an initial virtual scene, and further adjusting the division lines in the basic division network.
  • the step of "generating a scene division network in the initial virtual scene based on scene feature information" may include:
  • a basic division network is generated in the initial virtual scene, wherein the basic division network includes at least one division line to be adjusted;
  • the basic division network is the basic network required for generating the scene division network.
  • the basic division network may be a network composed of at least one division line to be adjusted, so that the scene division network can be obtained by adjusting the division lines to be adjusted.
  • the corresponding basic division network may be the basic road network of the virtual city.
  • the line distribution pattern can be used to guide the generation of the basic road network of the virtual city, so that the generated basic road network follows the line distribution pattern.
  • a basic road network can be generated first, which is close to the final urban road network on a large scale and still needs to be adjusted later.
  • although the basic road network has a road distribution similar to that of the final urban road network, the details of each road segment still need to be adjusted.
  • adjusting the division lines may refer to accepting, rejecting, and modifying the division lines under local restrictions within a certain range; these adjustments are all aimed at correcting small-scale errors to improve the local consistency of the basic division network, so as to obtain the final scene division network.
  • the step "Based on the scene feature information, in the initial virtual scene Generate a base partition network" which can include:
  • a basic partition network is generated in the initial virtual scene.
  • the line distribution pattern describes the pattern followed by the distribution of division lines in the target virtual scene.
  • when the target virtual scene is a virtual city, the line distribution pattern describes the pattern followed by the distribution of roads in the virtual city, for example, a natural pattern, a grid pattern, a radial pattern, an altitude-oriented pattern, etc.
  • in the natural pattern, the distribution of roads can be consistent with the population density distribution of the virtual city, that is, the urban road network grows naturally with increases in population density; natural patterns are often found in old urban blocks;
  • in the grid pattern, the distribution of roads can follow a given global or local angle and the maximum length and width of a single area block; for example, when the distribution of roads follows the grid pattern, a large number of rectangular blocks are generated;
  • in the radial pattern, roads can be generated radially from a center, making the resulting road network approximately radial; the radial pattern is common where there is a city center from which roads are generated radially;
  • in the altitude-oriented pattern, road generation is guided by the altitude of each location in the virtual city; the altitude-oriented pattern is common in areas with large differences in ground elevation; and so on.
  • the line distribution pattern required to generate the basic division network can include multiple patterns.
  • the weights of each line distribution pattern can be assigned at different positions in the initial virtual scene, so that, based on the different weights, multiple line distribution patterns are taken into account to varying degrees; in this way, multiple line distribution patterns can be mixed in the initial virtual scene to generate the basic division network, making the finally generated scene division network closer to the characteristics of an actual city (see the sketch below).
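  • the patent does not specify how the per-position weights are combined; as one plausible reading, each pattern can be treated as a direction field and blended as weighted unit vectors. The following Python sketch illustrates this under that assumption; all function names are hypothetical.

```python
import math

def grid_direction(x, y, axis_angle=0.0):
    # Grid pattern: roads follow a fixed global angle.
    return axis_angle

def radial_direction(x, y, cx=0.0, cy=0.0):
    # Radial pattern: roads point toward/away from a city center (cx, cy).
    return math.atan2(y - cy, x - cx)

def blended_direction(x, y, weighted_patterns):
    # Blend pattern directions as weighted unit vectors at this position.
    vx = vy = 0.0
    for pattern, weight in weighted_patterns:
        a = pattern(x, y)
        vx += weight * math.cos(a)
        vy += weight * math.sin(a)
    return math.atan2(vy, vx)

# Example: at this position, favor the grid pattern over the radial one.
direction = blended_direction(
    120.0, 80.0, [(grid_direction, 0.7), (radial_direction, 0.3)])
```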
  • the line distribution mode may be determined in a variety of ways, for example, it may be determined by user designation, or, for example, may be determined by system configuration; and so on.
  • when an engineer applies the virtual scene generation method described in this application to generate a virtual city, the engineer can select the required line distribution pattern by analyzing business requirements.
  • information conversion may be performed on the scene feature information of the target virtual scene to obtain tensor information corresponding to the scene feature information.
  • the tensor information can be expressed in various forms, for example, it can be presented in the form of a tensor field.
  • a tensor field is a generalization of a scalar field or a vector field; in a tensor field, each spatial point can be separately assigned a scalar or a vector.
  • the target virtual scene may be a virtual city
  • the scene feature information may be the population density distribution data of the virtual city. Then, the population density distribution data may be converted to obtain a corresponding tensor field.
  • tensors can have various data structures; for example, a scalar is a zeroth-order tensor, a vector is a first-order tensor, a two-dimensional matrix is a second-order tensor, and a three-dimensional matrix is a third-order tensor; therefore, there may be various ways of performing information conversion on the scene feature information to obtain the tensor information.
  • the data structure of the tensor can be a two-dimensional matrix
  • the tensor information corresponding to the scene feature information can be obtained by converting the data in the scene feature information into a corresponding two-dimensional matrix.
  • the scene feature information may be population density distribution data of a virtual city, and each data therein may be converted into a corresponding two-dimensional matrix, thereby obtaining a tensor field corresponding to the population density distribution data.
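  • as a hedged illustration of such a conversion, the sketch below builds a field of second-order tensors (2x2 matrices) from a population density grid by taking the outer product of the local density gradient with itself; the gradient-based construction is an assumption for illustration, not specified by the patent.

```python
import numpy as np

def density_to_tensor_field(density):
    """Convert a 2D population density grid into a field of 2x2 tensors.

    Sketch under an assumption: local orientation is taken from the
    density gradient g, and each grid cell is assigned the outer product
    g g^T, a symmetric second-order tensor whose major eigenvector
    follows the gradient direction.
    """
    gy, gx = np.gradient(density.astype(float))
    field = np.empty(density.shape + (2, 2))
    field[..., 0, 0] = gx * gx
    field[..., 0, 1] = gx * gy
    field[..., 1, 0] = gx * gy
    field[..., 1, 1] = gy * gy
    return field

density = np.array([[1.0, 2.0, 3.0],
                    [2.0, 4.0, 6.0],
                    [3.0, 6.0, 9.0]])
tensors = density_to_tensor_field(density)  # shape (3, 3, 2, 2)
```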
  • further, the basic division network can be generated based on the line distribution pattern and the tensor information; specifically, the step of "generating a basic division network in the initial virtual scene based on the line distribution pattern and the tensor information" may include:
  • a basic division network that obeys the line distribution pattern is generated, wherein the basic division network includes at least one division line to be corrected;
  • geometric correction is performed on the division line to be corrected, and the corrected division line is obtained as the division line to be adjusted in the basic division network.
  • a basic division network generation module that receives the line distribution pattern as a parameter can be designed, and the generation module is used to generate a basic division network that obeys the line distribution pattern.
  • the "pattern” parameter indicates the line distribution pattern, and "new road” indicates a division line to be corrected in the basic division network.
  • the generating module can accept other auxiliary parameters in addition to the line distribution mode as a parameter, and generate the basic division network in combination with the line distribution mode.
  • the auxiliary parameters can include branch probability and the like.
  • the generation module can be designed based on the idea of L-system.
  • the L-system (Lindenmayer system) is a string rewriting mechanism that is widely used in the research and modeling of plant growth processes.
  • the generation module of the basic division network can be designed based on the idea of the L-system, accepting the line distribution pattern required for generating the basic division network as a parameter, so as to generate, in the initial virtual scene, a basic division network that obeys the line distribution pattern.
  • the generation module generates division lines over a wide range according to the accepted parameters; for example, according to the population density distribution data and the branch probability, it generates roads from various places toward the city center, and the generated roads are then adjusted (a minimal sketch of this idea follows).
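  • the patent gives no code for the generation module; the following sketch only illustrates the L-system flavor it describes, where each active road tip rewrites itself into one more segment per step and spawns a perpendicular branch with the given branch probability. The geometry and parameter names are illustrative assumptions.

```python
import math
import random

def grow_roads(seed_pos, pattern_angle, branch_probability,
               steps, segment_len=10.0):
    """Minimal L-system-style growth: per rewriting step, every active
    road tip extends one segment and may spawn a perpendicular branch."""
    random.seed(42)  # deterministic for illustration
    segments = []
    tips = [(seed_pos, pattern_angle)]
    for _ in range(steps):
        next_tips = []
        for (x, y), angle in tips:
            nx = x + segment_len * math.cos(angle)
            ny = y + segment_len * math.sin(angle)
            segments.append(((x, y), (nx, ny)))
            next_tips.append(((nx, ny), angle))
            if random.random() < branch_probability:
                # Spawn a branch at a right angle to the current road.
                next_tips.append(((nx, ny), angle + math.pi / 2))
        tips = next_tips
    return segments

roads = grow_roads((0.0, 0.0), pattern_angle=0.0,
                   branch_probability=0.3, steps=4)
```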
  • in some cases, the basic division network is generated based only on the line distribution pattern, without taking the scene feature information of the target virtual scene into account; therefore, the division lines in the basic division network are likely to be inconsistent with the scene characteristics of the target virtual scene, that is, the basic division network thus generated includes at least one division line to be corrected.
  • the target virtual scene may be a virtual city
  • the line distribution pattern required for generating the basic division network may be a grid pattern
  • the scene feature information describing the target virtual scene may be population density distribution data.
  • the composition of the basic division network may include several rectangular blocks, but the distribution of the division lines may be inconsistent with the population density distribution of the virtual city; the inconsistency can take various forms: for example, instead of more division lines being generated in areas with high population density, evenly distributed division lines are generated; for another example, the angle of the generated division lines is not consistent with the direction of density change in the population density distribution; and so on.
  • the geometric correction refers to the correction performed on the divided lines from the geometric dimension, for example, adjusting the geometric features of the divided lines, such as the angle, length, position, width, etc. of the divided lines.
  • the geometric correction of the division lines in the basic division network based on the scene feature information may be implemented according to the tensor information corresponding to the scene feature information.
  • the target virtual scene can be a virtual city
  • the scene feature information can be population density distribution data.
  • the angle of the division lines in the basic division network can be adjusted according to the tensor information corresponding to the population density distribution data, so as to realize geometric correction of the division lines and obtain division lines that conform to both the line distribution pattern and the urban population density distribution.
  • “road” represents the division line to be corrected
  • the core idea is "rotate road direction with angle deviation", that is, the angle "road direction" of the division line is adjusted according to the angle deviation "angle deviation" between the division line and the tensor information "population" corresponding to the population density distribution data.
  • the corrected division lines may be further adjusted in combination with other auxiliary parameters; for example, the auxiliary parameters may include a preset population threshold, and the adjustments may include adding or deleting division lines.
  • "new road" is the division line after geometric correction;
  • "new road population" is the population density corresponding to the division line;
  • "branch threshold" and "threshold" both refer to preset population thresholds; therefore, whether to add a branch road or a main road can be determined by comparing "new road population" with "branch threshold" and comparing "new road population" with "threshold".
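  • read literally, the quoted pseudocode rotates the road toward the tensor-field direction and then compares the corrected road's population against the two thresholds. A hedged Python rendering of that reading, with illustrative values:

```python
def correct_and_classify(road_angle, field_angle, population,
                         branch_threshold, main_threshold):
    """Sketch of the correction step described above: rotate the road
    toward the tensor-field direction, then decide whether the corrected
    road spawns a branch or a main road by comparing its population
    against thresholds. Names mirror the quoted pseudocode; the exact
    comparison order is an assumption."""
    angle_deviation = field_angle - road_angle
    # "rotate road direction with angle deviation"
    corrected_angle = road_angle + angle_deviation
    if population >= main_threshold:
        decision = "add main road"
    elif population >= branch_threshold:
        decision = "add branch"
    else:
        decision = "no extension"
    return corrected_angle, decision

angle, decision = correct_and_classify(
    road_angle=0.4, field_angle=0.9, population=0.6,
    branch_threshold=0.5, main_threshold=0.8)  # -> (0.9, "add branch")
```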
  • the final basic division network in the initial virtual scene can be correspondingly generated.
  • the division lines in the basic division network may be further adjusted, so as to generate the final scene division network subsequently.
  • the divided line may be adjusted based on the line intersection information of the divided line in the basic divided network to obtain an adjusted divided line.
  • the line intersection information may be related information describing the intersection situation between divided lines in the basic division network.
  • for example, the line intersection information of a division line may be whether the division line intersects with other division lines, whether the division line is close to a line junction within a certain range, or whether the division line is close to other division lines within a certain range without intersecting them; and so on.
  • the step of "adjusting the division line to be adjusted based on the line intersection information of the division line to be adjusted in the basic division network to obtain the adjusted division line" may include:
  • determining a line constraint rule matching the line intersection information, and adjusting the target division line by following the line constraint rule to obtain the adjusted division line.
  • the line constraint rule is a related rule that constrains the divided line.
  • for example, it can be a rule that constrains the geometric characteristics of the division line, such as its angle, length, and position; for another example, it can be a rule that generates line junctions based on intersections between division lines; and so on.
  • the target divided line may be adjusted by following the line constraint rule to obtain an adjusted divided line.
  • the line constraint rule may be a constraint rule for generating a line intersection point based on the intersection between the divided line and other divided lines.
  • the divided lines are roads in the virtual city, and the generated line intersections can be correspondingly road intersections in the virtual city.
  • the line constraint rule may be that if it is detected that two divided lines intersect, a line junction is generated.
  • as shown in FIG. 3, if the intersection of two roads 301 and 302 is detected in the left figure, the road 302 can be adjusted; as shown in the right figure, a line junction 303 is generated, and the road 302 is shortened so that it terminates at the line junction 303.
  • the line constraint rule may be that if it is detected that the end point of the divided line is close to the existing line junction within a certain range, then the divided line is extended to reach the line junction.
  • as shown in FIG. 4, if the end point 402 of a road 401 in the left figure is detected to be close, within the range shown by the dotted circle, to an existing line junction 403, the road 401 can be adjusted; as shown in the right figure, the road 401 is extended so that it reaches the line junction 403.
  • the line constraint rule may be that if it is detected that a divided line is close to other divided lines within a certain range, then the divided line is extended to other divided lines to generate a line junction.
  • as shown in FIG. 5, if a road 501 in the left figure is detected to be close to another road 502 within the range indicated by the dotted circle, the road 501 can be adjusted; as shown in the right figure, the road 501 is extended so that it intersects the road 502, and a line junction 503 is generated.
  • line constraint rules can be set based on business requirements. The above are only some examples of the line constraint rules, not all cases.
  • an adjusted division line can be obtained, and therefore, a scene division network composed of the adjusted division lines is also determined.
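  • as an illustration of one of these rules, the sketch below implements the FIG. 4 case (snapping a road endpoint to a nearby existing junction); the distance test and the snap radius are assumptions for illustration.

```python
import math

def snap_endpoint_to_junction(endpoint, junctions, snap_radius):
    """Sketch of the FIG. 4 rule: if a road endpoint lies within
    snap_radius of an existing line junction, extend the road so it
    ends exactly at that junction; otherwise leave it unchanged."""
    ex, ey = endpoint
    for jx, jy in junctions:
        if math.hypot(jx - ex, jy - ey) <= snap_radius:
            return (jx, jy)  # the road is extended to the junction
    return endpoint

end = snap_endpoint_to_junction(
    (9.0, 10.5), junctions=[(10.0, 10.0)], snap_radius=2.0)
# -> (10.0, 10.0): the road now terminates at the junction
```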
  • the scene objects may be content objects in the target virtual scene.
  • the content objects include buildings, characters, animal characters, vegetation, water resources, etc.;
  • the content objects include celestial bodies, probes, satellites, and so on.
  • the scene object set may be a set including scene objects in the target virtual scene.
  • the scene object set may be a set including all scene objects in the target virtual scene; for another example, the scene object set may be a set including scene objects under a certain object category in the target virtual scene, such as a building set.
  • the scene object may be generated by assembling sub-modules of the scene object to be generated.
  • the step "generating a set of scene objects to be added to the scene dividing network" may include:
  • acquiring sub-modules of the scene object to be generated, and combining the sub-modules based on a combination rule and module parameters of the sub-modules to obtain a combined scene object;
  • generating the scene object set based on the combined scene objects.
  • the sub-modules of the scene object to be generated may be part of the scene object to be formed.
  • the building can be divided into different sub-modules, such as walls, windows, corners, gates, etc., according to the size information of the parts constituting the building.
  • the combination rule may be a rule that should be followed when describing the combination of submodules.
  • the combination rule of the sub-modules may be to combine the sub-modules from the inside to the outside on the basis of the main body of the building, so as to obtain the combined building; and so on.
  • composition rules can be set based on business requirements.
  • the module parameters may be related parameters of the sub-module, and information such as the shape of the sub-module and the combination position may be described by the module parameters.
  • module parameters may include size parameters, position parameters, color parameters of submodules; and so on.
  • the scene object may be a building.
  • each building sub-module of the building to be generated shown in FIG. 6 may be obtained.
  • based on the module parameters of each sub-module, such as position parameters and size parameters, the sub-modules are combined to obtain the combined building shown in the right figure of FIG. 7.
  • scene objects in the target virtual scene can be generated, thereby obtaining a set of scene objects to be added to the scene dividing network.
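  • a minimal sketch of such modular assembly, assuming a sub-module carries a kind, a size parameter, and a position parameter, and that the combination rule can be expressed as an ordering over sub-modules; the "largest footprint first" rule below is a stand-in for illustration, not the patent's rule.

```python
from dataclasses import dataclass

@dataclass
class SubModule:
    kind: str      # e.g. "wall", "window", "corner", "gate"
    size: tuple    # size parameter: (width, height) in meters
    offset: tuple  # position parameter relative to the building origin

def assemble_building(submodules, combination_rule):
    """Combine sub-modules according to a combination rule; the rule is
    modeled here as an ordering function over sub-modules."""
    placed = []
    for module in sorted(submodules, key=combination_rule):
        placed.append((module.kind, module.offset, module.size))
    return placed

parts = [
    SubModule("wall",   (4.0, 3.0), (0.0, 0.0)),
    SubModule("window", (1.0, 1.2), (1.5, 1.0)),
    SubModule("gate",   (1.5, 2.2), (2.5, 0.0)),
]
# Assumed rule: place larger footprints first.
building = assemble_building(
    parts, combination_rule=lambda m: -m.size[0] * m.size[1])
```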
  • the attribute matching refers to matching the object attribute of the scene object with the line attribute of the division mark data to determine whether the scene object is suitable to be allocated in the area corresponding to the division mark data.
  • the target virtual scene can be a virtual city
  • the scene objects can be buildings
  • the marking data can be roads in the city
  • each road has corresponding attributes, such as the population density that the road is suitable for carrying, the width of the road, and whether the road belongs to a commercial area or a residential area.
  • each building also has its corresponding attributes, such as the maximum population density that the building is suitable for carrying, the minimum population density that the building needs to carry, the building category to which the building belongs, the architectural style to which the building belongs, and the building density for which the building group is suitable; and so on.
  • for a building set to be added to the urban road network, it can be determined which buildings in the set are candidate buildings allocated to road A by performing attribute matching between the buildings and road A in the urban road network.
  • allocating the candidate scene object to the division mark data refers to establishing an association relationship between the candidate scene object and the division mark data in the spatial position.
  • the target virtual scene is a virtual city
  • the candidate scene objects can be candidate buildings
  • the divided marking data can be roads in the virtual city.
  • assigning the candidate building A to the road B refers to establishing the spatial relationship between the candidate building A and the road B.
  • for example, it can be stipulated that the candidate building A needs to be placed adjacent to the road B; it can also be stipulated that the candidate building A needs to be placed in the block corresponding to the road B; it can also be stipulated that the candidate building A is not allowed to be placed on the road B, as the road B would otherwise be blocked; and so on.
  • the step "matching the attributes of the scene object and the division mark data to obtain candidate scene objects assigned to the division mark data" may include:
  • determining an object attribute of the scene object and a line attribute of the division mark data, performing attribute matching between the object attribute and the line attribute, and determining a scene object that passes the matching as a candidate scene object allocated to the division mark data.
  • the object attribute is the relevant attribute of the scene object.
  • the object attribute may include the maximum population density that the building is suitable for carrying, the minimum population density that the building needs to carry, and the The building type, the architectural style to which the building belongs, the building density for which the building group is suitable; and so on.
  • city modeling in Houdini can be combined with the virtual scene generation method described in this application to generate a virtual city.
  • the user can set the object attributes of a building through Houdini; specifically, as shown at 801 in FIG. 8, the user sets the height of the building to be greater than 150 meters so that it belongs to the high-rise category, as shown at 802; the minimum population density that the building needs to carry is set to 0.5, as shown at 803; and the building is required to be adjacent to a highway, as shown at 804.
  • the line attribute is a related attribute of the divided marked data.
  • for example, the line attribute may include the population density that the road is suitable for carrying, the road width, and whether the road belongs to a commercial area or a residential area, etc.
  • both the object attributes of scene objects and the line attributes of division mark data are user-definable; after the user defines the object attributes and the line attributes, the terminal can correspondingly determine the object attribute of a scene object and the line attribute of the division mark data, and determine, by performing attribute matching between the object attribute and the line attribute, whether the scene object is a candidate scene object allocated to the division mark data.
  • attribute matching can be performed in a variety of ways; for example, the line attributes of the division mark data can be analyzed to determine the constraints or requirements set on scene objects that can be allocated to the division mark data; further, by analyzing the object attribute values of a scene object, it can be determined whether the scene object complies with the restrictions or requirements set by the division mark data, and thus whether the scene object is a candidate scene object that can be allocated to the division mark data (a minimal sketch follows).
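  • for concreteness, here is a hedged sketch of such matching, with attribute names modeled on the examples above (population density range, adjacency to a highway); the exact fields and predicate are assumptions, not the patent's specification.

```python
def attribute_match(building, road):
    """A building is treated as a candidate for a road only if the
    road's population density fits the building's density range and any
    adjacency requirement is satisfied. Field names are illustrative."""
    if not (building["min_population_density"]
            <= road["population_density"]
            <= building["max_population_density"]):
        return False
    if building.get("requires_highway") and road["type"] != "highway":
        return False
    return True

high_rise = {"min_population_density": 0.5,
             "max_population_density": 1.0,
             "requires_highway": True}
road_a = {"population_density": 0.7, "type": "highway"}
assert attribute_match(high_rise, road_a)  # candidate for road_a
```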
  • when the target virtual scene is a virtual city, the scene objects may be buildings, and the scene division network may be an urban road network.
  • the urban road network shown in FIG. 9 can be generated in the initial virtual scene with reference to the aforementioned steps, wherein the urban road network includes at least one road.
  • the object attributes of the buildings can be matched with the line attributes of each road in the urban road network to determine candidate buildings allocated to each road.
  • the result of attribute matching is visualized in FIG. 10; it can be seen that several buildings whose attributes match are placed on each road, where white rectangles and gray rectangles represent buildings of different categories; for example, white rectangles can represent residential buildings, and gray rectangles can represent medical buildings.
  • the matched scene objects are not the final target scene objects.
  • the buildings shown in FIG. 10 are not the final target buildings allocated to the roads, because there are still serious collision or overlapping phenomena, such as collisions between buildings and collisions between buildings and roads; therefore, it is still necessary to further screen out the target buildings.
  • only candidate scene objects allocated to the division mark data can be determined through attribute matching, and further, by performing step 105 and subsequent steps, the target scene object can be screened out from the candidate scene objects.
  • the positional association information is information describing how the candidate scene object and the division mark data are associated in position. It is worth noting that the position can be a position in a space of any dimension: for example, a position in a two-dimensional plane; for another example, a position in three-dimensional space; for another example, a position in a higher-dimensional space; and so on.
  • the positional association between the candidate scene object and the division mark data may be in many cases.
  • for example, the candidate scene object and the division mark data may overlap in position; for another example, they may maintain a distance within a certain range in position; for another example, they may be separated in position by a distance above a certain threshold; and so on.
  • the target scene object can be screened out from the candidate scene objects according to the position association information.
  • the step of "screening out the target scene object from the candidate scene object according to the positional association information between the candidate scene object and the dividing line" may include:
  • determining positional association information between the candidate scene objects and the division mark data based on geometric features of the candidate scene objects, performing collision detection on the candidate scene objects and the division mark data according to the positional association information, and screening out, from the candidate scene objects that pass the collision detection, the target scene objects allocated to the division mark data.
  • the geometric features of the candidate scene objects are features obtained by describing the candidate scene objects from the geometric dimension.
  • the geometric features may include features such as the position, occupied area or space of the candidate scene objects in the scene.
  • the position association information is information describing how the candidate scene object and the division mark data are related in position
  • the position association information between the candidate scene object and the division mark data can be determined based on the geometric features of the candidate scene object. For example, determine whether the candidate scene object and the division mark data overlap in position; for another example, determine whether the candidate scene object and the division mark data maintain a distance within a certain range in position; for another example, determine whether the candidate scene object and the division mark data are The locations are separated by a distance above a certain threshold; and so on.
  • the collision detection is a detection for judging whether two objects (or two collision bodies) intersect or overlap.
  • the collision detection may include collision detection between static collision bodies, collision detection between dynamic collision bodies, and collision detection between static and dynamic collision bodies; specifically, if there is no intersection or overlap between the collision bodies, the detection can be considered to have passed; otherwise, the detection can be considered to have failed.
  • collision detection can be implemented in various ways; for example, it can be achieved by generating rectangles or circles to wrap each collision body and detecting whether the rectangles or circles intersect or overlap; for another example, several rectangles or circles can be generated iteratively so that a collision body is wrapped by a combination of several rectangles or circles, and detection is performed on whether the combined shapes corresponding to different collision bodies intersect or overlap; and so on (a sketch of the rectangle-based test follows).
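  • a minimal sketch of the first variant, wrapping each collision body in an axis-aligned rectangle (xmin, ymin, xmax, ymax) and testing rectangle overlap:

```python
def aabb_overlap(a, b):
    """Axis-aligned bounding-box test: each collision body is wrapped in
    a rectangle (xmin, ymin, xmax, ymax); True means they overlap."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def passes_collision_detection(candidate_box, road_boxes):
    # A candidate building passes only if it overlaps no road box.
    return not any(aabb_overlap(candidate_box, r) for r in road_boxes)

building = (2.0, 2.0, 6.0, 5.0)
roads = [(0.0, 4.0, 10.0, 4.5)]  # a road strip crossing the building
print(passes_collision_detection(building, roads))  # False: road blocked
```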
  • when the target virtual scene is a virtual city to be generated, the candidate scene objects may be candidate buildings, and the division mark data may be roads in the virtual city.
  • the positional association information between a candidate building and a road can be determined based on the geometric features of the candidate building, and according to the positional association information, collision detection between the candidate building and the road is performed to determine whether the candidate building has been placed on the road, causing a road blockage.
  • FIG. 11 shows the result after collision detection is performed, on the basis of FIG. 10, between the candidate buildings in FIG. 10 and the roads in the urban road network, and the candidate buildings that fail the detection are filtered out.
  • the target scene objects assigned to the division mark data can be further screened out from the candidate scene objects that have passed the collision detection.
  • the step of "screening out, from the candidate scene objects that pass the collision detection, the target scene objects allocated to the division mark data" may include:
  • determining the object category to which the candidate scene objects that pass the detection belong, and screening the candidate scene objects under the object category based on an object density constraint parameter corresponding to the object category, to obtain the screened target scene objects as the target scene objects allocated to the division mark data.
  • the object density constraint parameter is the density constraint requirement of the scene object describing a specific object category.
  • the object category may be the category to which the building belongs, for example, the building category may include residential buildings, schools, prisons, office buildings, and so on.
  • Different classes of buildings themselves have different building density constraint rules, for example, the density constraint for residential buildings can be 0.7, which means the maximum density of residential buildings is 0.7; and so on.
  • the scene division network may include candidate scene objects under multiple object categories, and candidate scene objects under different object categories may have different object density constraint parameters; therefore, after the object category to which the detected candidate scene objects belong is determined, the candidate scene objects under that object category can be screened based on the object density constraint parameter, so as to avoid objects being too dense.
  • FIG. 12 shows the results presented after screening the candidate buildings of different categories in FIG. 11 on the basis of FIG. 11 .
  • each category of building is set with its corresponding object density constraint parameter; if the current density of the candidate buildings under a certain category is inconsistent with the object density constraint parameter, for example, the current density is much larger than the object density constraint parameter, then some candidate buildings under this category are deleted based on the parameter, so as to screen the candidate buildings under this category and obtain the screened target buildings shown in FIG. 12 (a sketch of such screening follows).
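  • a hedged sketch of density-based screening for one building category, where density is taken as total footprint over block area; the drop order (smallest footprint first) is an illustrative assumption, not specified by the patent.

```python
def screen_by_density(candidates, block_area, max_density):
    """Delete candidates of one category until the category's density
    (total footprint / block area) no longer exceeds its object density
    constraint parameter."""
    kept = sorted(candidates, key=lambda c: c["footprint"], reverse=True)
    while kept and sum(c["footprint"] for c in kept) / block_area > max_density:
        kept.pop()  # drop the smallest remaining footprint (assumption)
    return kept

residential = [{"footprint": 300.0},
               {"footprint": 250.0},
               {"footprint": 200.0}]
kept = screen_by_density(residential, block_area=1000.0, max_density=0.7)
# 750/1000 = 0.75 > 0.7, so one candidate is dropped -> 550/1000 = 0.55
```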
  • the step of "matching the target scene object with the division mark data to generate the target virtual scene” may include:
  • the priority level of each target scene object can be determined by sorting the target scene objects, so that the target scene object assigned to the divided marker data can be subsequently selected based on the priority level of each target scene object. Under the premise of restrictions, target scene objects with high priority levels can be assigned to the division mark data.
  • the target scene objects can be sorted based on their object attributes; for example, when the target scene objects are buildings, the buildings can be sorted based on their floor space. For another example, the target scene objects can be sorted based on their object categories; for example, when the target scene objects are buildings, it can be specified that residential buildings have a higher priority than medical buildings.
  • the target scene objects that can be allocated to the division mark data can be determined by performing collision detection among the target scene objects based on the priority level.
  • the step "perform collision detection on the target scene object according to the priority level" may include:
  • performing collision detection on target scene objects belonging to the same object category, and screening, based on the detection result, the target scene objects corresponding to the object category to obtain screened target scene objects;
  • the target scene objects that pass the collision detection are determined from the screened target scene objects.
  • collision detection on target scene objects belonging to the same object category aims to learn the collision situation of the target scene objects under that category; therefore, the detection result is information characterizing the degree of collision or overlap between the target scene objects under the object category.
  • based on the detection result, the target scene objects under the object category can be screened; for example, in the case of serious collisions, more target scene objects can be screened out, so as to avoid the problem of too many target scene objects being allocated to the division mark data.
  • the scene object can be a building
  • on the basis of the target buildings obtained after screening the candidate buildings shown in FIG. 11, collision detection can be performed on the target buildings of each category to learn the current collision situation of that category, and the target buildings under the category are further screened based on the collision situation, so as to avoid the problem of over-dense buildings; for example, if the current collision situation of a certain category of target buildings is serious, a larger screening ratio is set for the target buildings of that category.
  • alternatively, the same screening ratio can be used to screen the target buildings under each category.
  • FIG. 13 shows the effect of the remaining target buildings after 50% of the target buildings under each category in FIG. 12 are screened out.
  • the target scene objects that pass the detection may be determined from the screened target scene objects based on the priority levels of the screened target scene objects. For example, referring to FIG. 13 , it can be seen that even after 50% of the target buildings in each category are screened out, there is still a collision problem between the remaining target buildings. That is to say, through the foregoing multiple screening methods, although the number of target scene objects can be quickly reduced to a smaller range, there may still be a collision problem between target scene objects. Therefore, the target scene object finally allocated to the division mark data can be selected from among the remaining target scene objects according to the priority levels of the remaining target scene objects.
  • the target scene object with the highest priority may be determined as the target scene object that passes the final detection.
  • that is, in an area where target buildings overlap, only the target building with the highest priority is kept, and the other target buildings are filtered out.
  • for example, the buildings represented by white rectangles can be set to have a higher priority than the buildings represented by gray rectangles; then, when there is a collision problem between buildings of different categories, the buildings with the higher priority are kept, which resolves the collision and produces the final effect shown in FIG. 14 (a sketch of this resolution follows).
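  • a minimal sketch of this priority-based resolution: target buildings are visited in priority order and kept only if they do not overlap an already-kept building, so in any overlapping area only the highest-priority building survives. The box representation and field names are assumptions for illustration.

```python
def resolve_by_priority(buildings):
    """Keep only the highest-priority building in each overlapping group
    (lower number = higher priority); boxes are (xmin, ymin, xmax, ymax)."""
    def overlap(a, b):
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    kept = []
    for b in sorted(buildings, key=lambda x: x["priority"]):
        if all(not overlap(b["box"], k["box"]) for k in kept):
            kept.append(b)
    return kept

targets = [
    {"name": "residential", "priority": 1, "box": (0, 0, 4, 4)},
    {"name": "medical",     "priority": 2, "box": (3, 3, 7, 7)},
    {"name": "school",      "priority": 3, "box": (8, 8, 12, 12)},
]
final = resolve_by_priority(targets)  # medical overlaps residential
# and is filtered out; residential and school are kept.
```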
  • as can be seen from the above, this embodiment can greatly improve the efficiency of virtual scene generation.
  • specifically, this solution can generate a scene division network that matches the scene characteristics based on the scene feature information of the target virtual scene to be generated, so that the target virtual scene finally generated from the scene division network has a high degree of realism and credibility.
  • moreover, when allocating scene objects to the division mark data in the scene division network, this scheme considers not only the attribute matching degree between the scene objects and the division mark data but also the positional association information between them, which makes it possible to efficiently determine where the scene objects should be placed in the scene division network while effectively avoiding unrealistic scenes caused by overlapping positions or object collisions during generation.
  • in the following embodiment, the virtual scene generation apparatus is integrated into a server as an example for description. The server may be a single server or a server cluster composed of multiple servers; the terminal may be a mobile phone, a tablet computer, a notebook computer, or other devices.
  • in step 201, the terminal sends scene feature information corresponding to the target virtual scene to be generated to the server.
  • in this embodiment, the virtual scene generation method can be applied in game development to generate a virtual city. The terminal may send scene feature information describing the virtual city to be generated, such as population density distribution data, to the server.
  • in step 202, the server acquires the scene feature information.
  • in step 203, the server generates a scene division network in the initial virtual scene based on the scene feature information, where the scene division network includes at least one division mark data, and the division mark data is used to divide the initial virtual scene.
  • in one embodiment, referring to FIG. 16, the server may generate urban roads 1601 in the Houdini software; for example, it may generate the scene division network, i.e., an urban road network, after setting the road style 16011. It is worth noting that Houdini also provides functions for manually generating road curves 1602 and modifying roads 1603.
  • in addition, the server can set corresponding configurations 1604 for the roads in the urban road network, such as road attributes, and can further place road facilities 1605 such as trash cans, benches, and the like.
  • optionally, a function for modifying the road facilities 1606 may also be included.
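  • the road-growth step itself is driven by Houdini here, but a toy version consistent with the earlier description, in which roads grow toward denser population, could look like the sketch below; the 2D density grid, the 4-neighbor rule and the stopping condition are all assumptions:

    def grow_road(start, density, max_steps=50):
        """Grow a road polyline on a 2D density grid by repeatedly stepping
        toward the densest unvisited 4-neighbor (a crude growth stand-in)."""
        x, y = start
        road, visited = [(x, y)], {(x, y)}
        for _ in range(max_steps):
            neighbors = [(x + dx, y + dy)
                         for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= x + dx < len(density)
                         and 0 <= y + dy < len(density[0])
                         and (x + dx, y + dy) not in visited]
            if not neighbors:
                break
            x, y = max(neighbors, key=lambda p: density[p[0]][p[1]])
            road.append((x, y))
            visited.add((x, y))
        return road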
  • in step 204, the server generates a scene object set to be added to the scene division network, where the scene object set includes at least one scene object.
  • in one embodiment, referring to FIG. 16, after acquiring the building sub-modules 1607 obtained by manual modeling, the server can modularly generate building assets 1608 by combining the sub-modules, thereby generating the set of buildings to be added to the urban road network. Further, the server can also manually place 1609 the generated building assets.
  • optionally, buildings can also be generated entirely by manual modeling 1610 to obtain the building set. Further, the server can set corresponding attributes for the generated buildings, thereby setting placement criteria 1611 for them.
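  • a minimal sketch of such modular combination, assuming each sub-module is a dict whose offset is expressed relative to the building body and whose layer encodes an inside-out combination order (all names illustrative), might be:

    def combine_building(body, sub_modules):
        """Assemble a building by attaching sub-modules (doors, windows, ...)
        to the body, inner layers first, at their parameterized offsets."""
        building = {"body": body, "parts": []}
        for mod in sorted(sub_modules, key=lambda m: m["layer"]):
            building["parts"].append({
                "name": mod["name"],
                "offset": mod["offset"],  # position relative to the body
                "size": mod["size"],
            })
        return building

    asset = combine_building(
        body={"size": (10, 10, 30)},
        sub_modules=[
            {"name": "door", "layer": 0, "offset": (0, 0, 0), "size": (2, 1, 3)},
            {"name": "window", "layer": 1, "offset": (0, 2, 5), "size": (1, 1, 2)},
        ],
    )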
  • in step 205, the server performs attribute matching between the scene objects and the division mark data in the scene division network, and obtains the candidate scene objects allocated to the division mark data.
  • in one embodiment, the server may place buildings by matching the road configuration against the building placement criteria 1612, to obtain the candidate buildings assigned to each road.
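  • a hedged sketch of this matching step, using made-up attribute names that mirror the criteria described in the earlier embodiment (minimum population, zoning, adjacency to a highway), could be:

    def matches_road(building, road):
        """True if the building's placement criteria fit the road's configuration."""
        if building["min_population"] > road["population"]:
            return False
        if building.get("requires_highway") and road["type"] != "highway":
            return False
        return building["zone"] in road["allowed_zones"]

    def candidates_for_road(buildings, road):
        """Candidate buildings allocated to a road are those whose attributes match."""
        return [b for b in buildings if matches_road(b, road)]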
  • in step 206, the server screens out the target scene objects from the candidate scene objects according to the positional association information between the candidate scene objects and the division mark data.
  • in one embodiment, the server performs collision detection between the candidate buildings and the roads according to the positional association information between them, and then performs secondary collision detection among the target buildings that passed the first detection, so as to filter out the final target buildings.
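  • both detection passes can be reduced to pairwise footprint overlap tests; a minimal axis-aligned bounding-box version is sketched below (the footprint format is an assumption), with the second pass then applying the same overlap test among the surviving buildings, as in the priority-based sketch given earlier:

    def aabb_overlap(a, b):
        """True if two footprints overlap; each is (x_min, y_min, x_max, y_max)."""
        return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

    def passes_road_check(building_box, road_boxes):
        """First pass: a candidate building passes if it blocks no road segment."""
        return all(not aabb_overlap(building_box, r) for r in road_boxes)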
  • in step 207, the server matches the target scene objects with the division mark data to generate the target virtual scene.
  • in one embodiment, the virtual scene generation method described in the embodiments of the present application can be developed in Houdini into a series of reusable digital assets (Houdini Digital Asset, HDA) to support game projects that need to simulate cities. Referring to FIG. 16, after the road configuration and the placement criteria of the building assets have been set, the assets can be automatically placed 1612 according to the configuration and criteria, thereby implementing steps 205, 206 and 207. Further, selected areas can also be modified 1613.
  • Figure 17 shows the node network in which the HDAs are connected for use, including a subnet mask node, a road growth node, a subnet road attribute node, a subnet building node, a building placement node, a road instance model export node, and a building instance model export node.
  • the virtual city generated from this node network is shown in Figure 18, and the effect presented in the Unreal Engine game engine is shown as 2001 in Figure 19.
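  • as a sketch of how such a network could be wired programmatically from Houdini's Python shell (the HDA operator names below are hypothetical stand-ins for the Figure 17 nodes and assume the corresponding HDAs are installed):

    import hou  # Houdini's built-in Python module

    obj = hou.node("/obj")
    net = obj.createNode("geo", "city_network")

    # Hypothetical HDA type names standing in for the Figure 17 nodes.
    mask = net.createNode("subnet_mask")
    growth = net.createNode("road_growth")
    road_attr = net.createNode("subnet_road_attribute")
    placement = net.createNode("building_placement")

    # Chain the nodes in the order shown in Figure 17.
    growth.setInput(0, mask)
    road_attr.setInput(0, growth)
    placement.setInput(0, road_attr)
    placement.setDisplayFlag(True)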
  • in step 208, the server generates scene rendering data of the target virtual scene, and sends the scene rendering data to the terminal.
  • for example, referring to FIG. 16, the server may save the scene rendering data in the form of a cache file 1614, so that the terminal can display the target virtual scene generated by the server based on the cache file.
  • in step 209, the terminal receives the scene rendering data sent by the server, and displays the generated target virtual scene based on the scene rendering data.
  • for example, referring to FIG. 16, in addition to the Houdini engine 1615, the terminal may also run the Unreal engine 1616 or the Unity3D software 1616. After receiving the scene rendering data, the terminal can display the virtual city rendered and generated by the server, as shown in FIG. 18 or FIG. 19.
  • when the above solution is applied to generating a virtual city in a game application and implemented through Houdini, Houdini's extensive modeling functions can lower the learning cost of game 3D art and improve controllability. For example, the models of the building modules can be optimized for maximum runtime efficiency in the game engine, the output of the building modules can exploit the engine's techniques for large-scale scenes to further increase runtime efficiency, and the building placement algorithm can leverage the engine's existing art assets to speed up production.
  • in order to better implement the above method, an embodiment of the present application further provides an apparatus for generating a virtual scene, where the apparatus may be integrated in a server or a terminal. The server may be a single server or a server cluster composed of multiple servers; the terminal may be a mobile phone, a tablet computer, a notebook computer, or other devices.
  • as shown in FIG. 20, the virtual scene generating apparatus may include an information acquiring unit 301, a network generating unit 302, a set generating unit 303, an attribute matching unit 304, a target screening unit 305 and a target allocation unit 306, as follows:
  • an information acquisition unit 301, configured to acquire scene feature information corresponding to the target virtual scene to be generated;
  • a network generation unit 302, configured to generate a scene division network in an initial virtual scene based on the scene feature information, wherein the scene division network includes at least one division mark data, and the division mark data is used to divide the initial virtual scene;
  • a set generating unit 303, configured to generate a set of scene objects to be added to the scene division network, wherein the set of scene objects includes at least one scene object;
  • an attribute matching unit 304, configured to perform attribute matching between the scene objects and the division mark data to obtain candidate scene objects allocated to the division mark data;
  • a target screening unit 305, configured to screen out target scene objects from the candidate scene objects according to the positional association information between the candidate scene objects and the division mark data;
  • a target allocation unit 306, configured to match the target scene objects with the division mark data to generate the target virtual scene.
  • in one embodiment, referring to FIG. 21, the division mark data is a division line, and the network generation unit 302 may include:
  • a basic generation subunit 3021, configured to generate a basic division network in the initial virtual scene based on the scene feature information, wherein the basic division network includes at least one division line to be adjusted;
  • a local adjustment subunit 3022, configured to adjust the division line to be adjusted based on the line intersection information of that division line in the basic division network, to obtain an adjusted division line;
  • a network determination subunit 3023, configured to determine the scene division network according to the adjusted division line.
  • in one embodiment, the basic generation subunit 3021 can be used to:
  • determine the line distribution pattern required to generate the basic division network; perform information conversion on the scene feature information to obtain the tensor information corresponding to the scene feature information; and generate the basic division network in the initial virtual scene based on the line distribution pattern and the tensor information.
  • in one embodiment, the basic generation subunit 3021 may be specifically used to:
  • generate, in the initial virtual scene, a basic division network that obeys the line distribution pattern, wherein the basic division network includes at least one division line to be corrected; and geometrically correct the division line to be corrected according to the tensor information, to obtain a corrected division line as the division line to be adjusted in the basic division network.
  • in one embodiment, the local adjustment subunit 3022 can be used to:
  • design a line constraint rule based on the line intersection information and determine a target division line to be adjusted; and adjust the target division line following the line constraint rule, to obtain the adjusted division line.
  • in one embodiment, referring to FIG. 22, the target screening unit 305 may include:
  • an association determination subunit 3051, configured to determine the positional association information between the candidate scene objects and the division line based on the geometric features of the candidate scene objects;
  • a first detection subunit 3052, configured to perform collision detection between the candidate scene objects and the division line according to the positional association information;
  • a target screening subunit 3053, configured to screen out the target scene objects allocated to the division line from the candidate scene objects that have passed the collision detection.
  • in one embodiment, the target screening subunit 3053 can be used to:
  • determine the object category to which the candidate scene objects that have passed the collision detection belong, wherein the object category has a corresponding object density constraint parameter; and screen the candidate scene objects under the object category according to the object density constraint parameter, to obtain the screened target scene objects as the target scene objects allocated to the division line.
  • in one embodiment, referring to FIG. 23, the target allocation unit 306 may include:
  • an object sorting subunit 3061, configured to sort the target scene objects and determine the priority level of each target scene object;
  • a second detection subunit 3062, configured to perform collision detection on each target scene object according to the priority level;
  • a target allocation subunit 3063, configured to match the target scene objects that pass the collision detection with the division mark data to generate the target virtual scene.
  • in one embodiment, the second detection subunit 3062 can be used to:
  • perform collision detection on target scene objects belonging to the same object category; based on the detection result, screen the target scene objects corresponding to the object category to obtain the screened target scene objects; and based on the priority levels of the screened target scene objects, determine the target scene objects that pass the collision detection from the screened target scene objects.
  • in one embodiment, referring to FIG. 24, the set generating unit 303 may include:
  • a module acquisition subunit 3031, configured to acquire the sub-modules of the scene object to be generated;
  • a rule determination subunit 3032, configured to determine the combination rule corresponding to the sub-modules;
  • a module combination subunit 3033, configured to combine the sub-modules based on the module parameters of the sub-modules and the combination rule, to obtain the combined scene object;
  • a set generating subunit 3034, configured to generate the scene object set according to the combined scene objects.
  • in one embodiment, referring to FIG. 25, the attribute matching unit 304 may include:
  • an attribute determination subunit 3041, configured to determine the object attributes of the scene objects and the line attributes of the division mark data;
  • an attribute matching subunit 3042, configured to perform attribute matching between the object attributes and the line attributes;
  • a candidate determination subunit 3043, configured to determine the scene objects that pass the matching as the candidate scene objects allocated to the division mark data.
  • in addition, an embodiment of the present application also provides an electronic device, which may be a server or a terminal; FIG. 26 shows a schematic structural diagram of the electronic device involved in the embodiments of the present application. The electronic device includes a memory 401 with one or more computer-readable storage media, an input unit 402, a display unit 403, a Wireless Fidelity (WiFi) module 404, a processor 405 including one or more processing cores, and components such as a power supply 406.
  • the memory 401 can be used to store software programs and modules, and the processor 405 executes various functional applications and data processing by running the software programs and modules stored in the memory 401. The memory 401 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. Accordingly, the memory 401 may also include a memory controller to provide the processor 405 and the input unit 402 with access to the memory 401.
  • the input unit 402 may be configured to receive input numeric or character information, and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
  • the display unit 403 may be configured to display information input by the user or information provided to the user, as well as various graphical user interfaces of the electronic device, which may be composed of graphics, text, icons, videos, and any combination thereof.
  • the processor 405 is the control center of the electronic device; it uses various interfaces and lines to connect the various parts of the entire device, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 401 and invoking the data stored in the memory 401, thereby monitoring the device as a whole.
  • the electronic device also includes a power supply 406 (such as a battery) for supplying power to the various components.
  • preferably, the power supply can be logically connected to the processor 405 through a power management system, so that charging, discharging, and power consumption management functions are managed through the power management system.
  • to this end, the embodiments of the present application provide a computer-readable storage medium storing a plurality of instructions, where the instructions can be loaded by a processor to execute the steps in any of the virtual scene generation methods provided by the embodiments of the present application. The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
  • according to one aspect of the present application, a computer program product or computer program is provided, comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the methods provided in the various optional implementations of the virtual scene generation aspect described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application disclose a virtual scene generation method, apparatus, device and storage medium. The embodiments of the present application can acquire scene feature information corresponding to a target virtual scene to be generated; generate a scene division network in an initial virtual scene based on the scene feature information, wherein the scene division network includes at least one division mark data, and the division mark data is used to divide the initial virtual scene; generate a set of scene objects to be added to the scene division network, wherein the set of scene objects includes at least one scene object; perform attribute matching between the scene objects and the division mark data to obtain candidate scene objects allocated to the division mark data; screen out target scene objects from the candidate scene objects according to positional association information between the candidate scene objects and the division mark data; and match the target scene objects with the division mark data to generate the target virtual scene.

Description

一种虚拟场景生成方法、装置、设备和存储介质
本申请要求于2021年2月7日提交中国专利局、申请号为202110178014.2、申请名称为“一种虚拟场景生成方法、装置、设备和存储介质”的中国专利申请的优先权。
技术领域
本申请涉及计算机技术领域,具体涉及一种虚拟场景生成方法、装置、设备和存储介质。
发明背景
随着信息技术的发展,我们可以在计算机设备上对生活中的场景进行仿真以生成仿真的虚拟场景,也可以重新创造或设计新的虚拟场景。例如,可以生成仿真或创造的虚拟社区、虚拟村落、虚拟城市等;又如,可以生成仿真或创造的虚拟系统、虚拟星系、虚拟宇宙等;等等。而生成的虚拟场景可以有多种用途,如可以用于影视特效、游戏场景构建,等等。
在对相关技术的研究和实践过程中,本申请的发明人发现,目前虚拟场景的生成方法均较为低效,无论是通过开发人员手工设计、或者是基于真实的场景信息,如航拍图像来进一步生成,都需要付出较大的精力与成本。
发明内容
本申请实施例提供一种虚拟场景生成方法、装置、设备和存储介质,可以提高虚拟场景的生成效率。
本申请实施例提供一种虚拟场景生成方法,由电子设备执行,包括:
获取待生成的目标虚拟场景对应的场景特征信息;
基于所述场景特征信息,在初始虚拟场景中生成场景划分网络,其中,所述场景划分网络包括至少一个划分标记数据,所述划分标记数据用于划分所述初始虚拟场景;
生成待添加至所述场景划分网络中的场景对象集合,其中,所述场景对象集合包括至少一个场景对象;
将所述场景对象与所述划分标记数据进行属性匹配,得到分配给所述划分标记数据的候选场景对象;
根据所述候选场景对象与所述划分标记数据之间的位置关联信息,从所述候选场景对象中筛选出目标场景对象;
将所述目标场景对象与所述划分标记数据进行匹配,生成目标虚拟场景。
相应的,本申请实施例还提供一种虚拟场景生成装置,包括:
信息获取单元,用于获取待生成的目标虚拟场景对应的场景特征信息;
网络生成单元,用于基于所述场景特征信息,在初始虚拟场景中生成场景划分 网络,其中,所述场景划分网络包括至少一个划分标记数据,所述划分标记数据用于划分所述初始虚拟场景;
集合生成单元,用于生成待添加至所述场景划分网络中的场景对象集合,其中,所述场景对象集合包括至少一个场景对象;
属性匹配单元,用于将所述场景对象与所述划分标记数据进行属性匹配,得到分配给所述划分标记数据的候选场景对象;
目标筛选单元,用于根据所述候选场景对象与所述划分标记数据之间的位置关联信息,从所述候选场景对象中筛选出目标场景对象;
目标分配单元,用于将所述目标场景对象与所述划分标记数据进行匹配,生成目标虚拟场景。
相应的,本申请实施例还提供一种计算机可读存储介质,其上存储有计算机程序,其中,所述计算机程序被处理器执行时实现如本申请实施例所示的虚拟场景生成方法的步骤。
相应的,本申请实施例还提供一种电子设备,包括存储器,处理器及存储在存储器上并可在处理器上运行的计算机程序,其中,所述处理器执行所述计算机程序时实现如本申请实施例所示的虚拟场景生成方法的步骤。
相应的,本申请实施例还提供一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述虚拟场景生成方法。
附图简要说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的虚拟场景生成方法的场景示意图;
图2是本申请实施例提供的虚拟场景生成方法的流程图;
图3是本申请实施例提供的线路约束规则的应用示意图;
图4是本申请实施例提供的线路约束规则的另一应用示意图;
图5是本申请实施例提供的线路约束规则的另一应用示意图;
图6是本申请实施例提供的各个建筑子模块的示意图;
图7是本申请实施例提供的组合后的建筑物的示意图;
图8是本申请实施例提供的通过Houdini设置建筑物属性的界面示意图;
图9是本申请实施例提供的城市道路网络的示意图;
图10是本申请实施例提供的属性匹配后的道路网络示意图;
图11是本申请实施例提供的筛除了检测未通过的候选建筑物之后的道路网络示意图;
图12是本申请实施例提供的按照对象密度约束参数筛选目标建筑物后的道路网络示意图;
图13是本申请实施例提供的对图12中各类别下的目标建筑物均筛除50%后的道路网络示意图;
图14是本申请实施例提供的根据优先级筛选目标建筑物后的道路网络示意图;
图15是本申请实施例提供的虚拟场景生成方法的另一流程示意图;
图16是本申请实施例提供的在Houdini软件中生成城市道路的流程示意图;
图17是本申请实施例提供的Houdini内连接起来使用的节点网络的示意图;
图18是本申请实施例提供的虚拟城市的示意图;
图19是本申请实施例提供的在Unreal Engine游戏引擎内呈现的虚拟城市的示意图;
图20是本申请实施例提供的虚拟场景生成装置的结构示意图;
图21是本申请实施例提供的虚拟场景生成装置的另一结构示意图;
图22是本申请实施例提供的虚拟场景生成装置的另一结构示意图;
图23是本申请实施例提供的虚拟场景生成装置的另一结构示意图;
图24是本申请实施例提供的虚拟场景生成装置的另一结构示意图;
图25是本申请实施例提供的虚拟场景生成装置的另一结构示意图;
图26是本申请实施例提供的电子设备的结构示意图。
实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例提供一种虚拟场景生成方法、装置、设备和存储介质。具体地,本申请实施例提供适用于电子设备的虚拟场景生成装置。其中,该电子设备可以为终端或服务器等设备,该终端可以为手机、平板电脑、笔记本电脑等设备。该服务器可以是单台服务器,也可以是由多个服务器组成的服务器集群。
本申请实施例将以虚拟场景生成方法由终端执行为例来进行介绍。
参考图1,本申请实施例提供了虚拟场景生成系统,包括有终端10,其中,终端10可以用于生成目标虚拟场景,例如,可以用于生成仿真或创造的虚拟社区、虚拟村落、虚拟城市;又如,可以用于生成仿真或设计的虚拟系统、虚拟星系、虚拟宇宙,等等。
具体地,终端10可以获取待生成的目标虚拟场景对应的场景特征信息,并基于该场景特征信息,在初始虚拟场景中生成场景划分网络,其中,该初始虚拟场景可以在生成或搭建目标虚拟场景的过程中,作为承载划分标记数据与场景对象的容器,以使得可以通过在初始虚拟场景中添加划分标记数据与场景对象来辅助生成目标虚拟场景。其次,生成的场景划分网络中可以包括至少一个 划分标记数据,该划分标记数据可以用于划分初始虚拟场景。
终端10可以生成待添加至该场景划分网络中的场景对象集合,其中,该场景对象集合包括至少一个场景对象。例如,当目标虚拟场景为仿真或设计的虚拟城市时,场景对象可以为建筑物、植被等。
进一步地,终端10可以将场景对象与场景划分网络中的划分标记数据进行属性匹配,从而得到分配给该划分标记数据的候选场景对象。并根据该候选场景对象与该划分标记数据之间的位置关联信息,从分配给该划分标记数据的候选场景对象中筛选出目标场景对象。以使得终端10可以通过将该目标场景对象与该划分标记数据进行匹配,来将目标场景对象添加至场景划分网络中,生成目标虚拟场景。
在另一实施例中,虚拟场景生成方法可以由终端与服务器共同执行。
参考图1,虚拟场景生成系统包括有终端10与服务器20,其中,终端10可以向服务器20发送待生成的目标虚拟场景对应的场景特征信息。相应地,服务器20可以获取待生成的目标虚拟场景的场景特征信息;基于该场景特征信息,在初始虚拟场景中生成场景划分网络,其中,该场景划分网络可以包括至少一个划分标记数据,该划分标记数据可以用于划分初始虚拟场景;生成待添加至场景划分网络中的场景对象集合,其中,该场景对象集合可以包括至少一个场景对象;将场景对象与场景划分网络中的划分标记数据进行属性匹配,得到分配给该划分标记数据的候选场景对象;根据该候选场景对象与该划分标记数据之间的位置关联信息,从该候选场景对象中筛选出目标场景对象;将目标场景对象与该划分标记数据进行匹配,以将该目标场景对象添加至场景划分网络中,生成目标虚拟场景。
可选地,服务器20可以将生成的目标虚拟场景的场景渲染数据发送给终端10,以使得终端10可以基于该场景渲染数据,展示该目标虚拟场景的相关画面。
以下分别进行详细说明。需说明的是,以下实施例的描述顺序不作为对实施例优选顺序的限定。
随着人工智能技术研究和进步,人工智能技术在多个领域展开研究和应用,例如常见的智能家居、智能穿戴设备、虚拟助理、智能音箱、智能营销、无人驾驶、自动驾驶、无人机、机器人、智能医疗、智能客服等,相信随着技术的发展,人工智能技术将在更多的领域得到应用,并发挥越来越重要的价值。
本申请实施例提供的一种虚拟场景生成方法,涉及人工智能的计算机视觉等技术,并且,该方法由电子设备执行,例如,由图1中的终端10或服务器20执行,也可以由终端10和服务器20共同执行。
本申请实施例以虚拟场景生成方法由终端执行为例来进行说明,具体的,由集成在终端中的虚拟场景生成装置来执行,如图2所述,该虚拟场景生成方法的具体流程可以如下:
101、获取待生成的目标虚拟场景对应的场景特征信息。
其中,实际生活中的场景为指在一定的时间、空间内发生的一定任务行动或因人物关系所构成的具体生活画面,相对而言,是角色的行动和生活事件的阶段性横向展示,用于表现剧情内容的具体发展过程。
本申请中涉及到的待生成的目标虚拟场景,可以为待建模的虚拟场景,具体地,可以为利用计算机视觉技术对实际生活中的场景进行仿真得到的仿真虚拟场景,也可以为利用计算机视觉技术重新创造或设计建模后得到的新的虚拟场景。例如,目标虚拟场景可以为仿真或创造的虚拟社区、虚拟村落、虚拟城市等;又如,可以为仿真或创造的虚拟系统、虚拟星系、虚拟宇宙等;等等。生成的目标虚拟场景可以有多种用途,如,可以用于影视特效、游戏场景构建,等等。
本申请实施例中,可以以目标虚拟场景为虚拟城市为例,对虚拟场景生成方法进行介绍,其中,待生成的目标虚拟场景可以为仿真的虚拟城市,或者创造的虚拟城市,或者结合现实中的真实数据辅助生成的虚拟城市,等等。
其中,场景特征信息用于描述目标虚拟场景所具有的特点,可以为描述目标虚拟场景的多维场景特征的相关信息,例如,地理维度、人口密度维度、功能区域维度、建筑物高度维度。
例如,场景特征信息在地理维度上的特点,包括海拔分布、土地分布、水资源分布、植被分布,等等;又如,场景特征信息还可以描述目标虚拟场景在社会统计意义上的特点,包括人口密度分布、功能区域分布(以目标虚拟场景为虚拟城市为例,功能区域可以包括住宅区、商业区、混合区,等等)、高度分布(以目标虚拟场景为虚拟城市为例,高度分布可以为城市中建筑物的高度分布),等等。
场景特征信息的数据格式可以有多种,例如,可以为数据表、图像、音频等多样的数据格式。
在一实施例中,当场景特征信息为虚拟城市的人口密度分布信息时,获取待生成的目标虚拟场景对应的场景特征信息,可以通过获取虚拟城市的人口密度分布图来实现。
在实际应用中,获取待生成的目标虚拟场景对应的场景特征信息的方式可以有多种,例如,可以从数据库中提取;又如,可以向服务器或网络请求;又如,可以通过数据采集设备,如相机或摄像机,即时采集或搜索;又如,可以通过用户输入,等等。
102、基于场景特征信息,在初始虚拟场景中生成场景划分网络,其中,该场景划分网络包括至少一个划分标记数据,该划分标记数据用于划分初始虚拟场景。
其中,初始虚拟场景可以作为在生成或搭建目标虚拟场景的过程中的一个基础容器。例如,初始虚拟场景可以作为待生成的划分标记数据与场景对象的容器,从而可以通过在初始虚拟场景中添加划分标记数据与场景对象来辅助生 成目标虚拟场景。
例如,初始虚拟场景可以为坐标系,如,三维坐标系或二维坐标系等,该坐标系可以为空白坐标系,也可以包括有已有场景对象的非空坐标系。在一实施例中,可以在图形软件中进行目标虚拟场景的构建,则初始虚拟场景可以为图形软件中生成目标虚拟场景时,图形软件的初始化状态。例如,可以在三维计算机图形软件Houdini中构建目标虚拟场景,初始虚拟场景可以为Houdini可交互界面中的三维坐标系,并且,该三维坐标系可以为空白三维坐标系,也即在Houdini中从空白开始创建目标虚拟场景。该三维坐标系也可以为非空三维坐标系,也即通过Houdini在已有虚拟场景中继续构建目标虚拟场景。
其中,划分标记数据为在初始虚拟场景中起到划分标记的数据,包括可视化的划分线路以及非可视化的标记数据。例如,可视化的划分线路可以为线段、直线、虚线、曲线等,这种情况下,划分标记数据可以作为最后生成的目标虚拟场景的一部分呈现出来;又如,非可视化的标记数据,可以为坐标数据、长度数据等,此时,划分标记数据可以不作为最后生成的目标虚拟场景中的一部分呈现出来,仅是作为辅助生成目标虚拟场景所需的数据。
在一实施例中,目标虚拟场景为虚拟城市时,可视化的划分线路可以为虚拟城市中的道路,通过道路,可以将虚拟城市划分为不同的区域,在该实施例中,划分标记数据可以为作为最后生成的虚拟城市的一部分呈现出来。
在另一实施例中,目标虚拟场景为虚拟宇宙时,非可视化的标记数据可以为将虚拟宇宙划分成不同部分所需的标记数据,如,不同星系或不同空间区域等。在该实施例中,划分标记数据可以不作为虚拟宇宙中的一部分来呈现虚拟宇宙,仅是作为辅助生成虚拟宇宙所需的标记数据。
其中,场景划分网络为由划分标记数据构成的网络。可选的,场景划分网络中还可以包括有由各划分标记数据交汇或相交所构成的节点,或者独立存在的节点,本申请实施例对此不作限制。
由于场景特征信息中包括有描述目标虚拟场景的场景特征的相关信息,因此,可以基于该场景特征信息,在初始虚拟场景中生成场景划分网络。例如,可以通过在初始虚拟场景中生成基础划分网络,并进一步地对该基础划分网络中的划分线路进行调整,从而生成场景划分网络。具体地,步骤“基于场景特征信息,在初始虚拟场景中生成场景划分网络”,可以包括:
基于场景特征信息,在初始虚拟场景中生成基础划分网络,其中,该基础划分网络包括至少一条待调整的划分线路;
基于待调整的划分线路在基础划分网络中的线路交汇信息,对该待调整的划分线路进行调整,得到调整后的划分线路;
根据调整后的划分线路,确定场景划分网络。
其中,基础划分网络为生成场景划分网络所需的基础网络,具体地,基础划分网络可以为由至少一条待调整的划分线路构成的网络,以使得可以通过对 待调整的划分线路进行调整从而得到场景划分网络。
例如,目标虚拟场景为虚拟城市时,在生成城市道路网络的过程中,相应地基础划分网络可以为虚拟城市的基础道路网络。进一步地,可以利用线路分布模式来指导生成虚拟城市的基础道路网络,以使得生成的基础道路网络遵循该道路分布模式。
因此,可以首先生成基础道路网络,该基础道路网络与最终的城市道路网络相比,在大尺度上比较接近,仍需进行后续调整。譬如,基础道路网络与最终的城市道路网络相比,虽具有相似的道路分布,但在各段道路的细节上仍需调整。
其中,对划分线路进行调整,可以指在一定范围的局部限制条件下,接受、拒绝和修改划分线路。这些调整都是为了纠正小尺度上的错误,以提高基础划分网络的局部一致性,以便得到最终的场景划分网络。
基于场景特征信息,在初始虚拟场景中生成基础划分网络的方式可以有多种,例如,可以结合线路分布模式与张量场来实现,具体地,步骤“基于场景特征信息,在初始虚拟场景中生成基础划分网络”,可以包括:
确定生成基础划分网络所需的线路分布模式;
对场景特征信息进行信息转换,得到该场景特征信息对应的张量信息;
基于该线路分布模式与该张量信息,在初始虚拟场景中生成基础划分网络。
其中,线路分布模式可以为描述划分线路在目标虚拟场景中的分布所遵循的模式,例如,目标虚拟场景为虚拟城市时,则线路分布模式可以为描述道路在虚拟城市中分布所遵循的模式,譬如,自然模式、网格模式、辐射模式、海拔导向模式等。
举例来说,在自然模式中,道路的分布可以与虚拟城市的人口密度分布一致,也即城市道路网络的分布与人口密度的自然增长一致,例如,自然模式常见于城市的老街区中;
在网格模式中,道路的分布可以遵循给定的全局角度或局部角度、以及单个区域块的最大长度和最大宽度,例如,当道路的分布遵循网格模式时,可以在虚拟城市中生成大量的矩形街区;
在辐射模式中,道路可以沿着中心的径向生成,从而使得生成的道路网络类似于辐射状,例如,辐射模式常见于存在一个城市中心中,其中,道路沿着该城市中心的径向生成;
在海拔导向模式中,道路的生成以虚拟城市中各地的海拔为导向,例如,海拔导向模式常见于地面高程差异较大的地区;等等。
值得注意的是,生成基础划分网络所需的线路分布模式,可以包括多种模式,在应用时可以在初始虚拟场景中不同的位置对各线路分布模式进行权重赋值,这样,基于不同的权重,多个线路分布模式被不同程度地考虑进去,这样的话,可以在初始虚拟场景中混合多种线路分布模式来生成基础划分网络,从 而使得最后生成的场景划分网络更贴近实际城市的特征。
线路分布模式的确定方式可以有多种,例如,可以通过用户指定确定,又如,可以通过系统配置;等等。在一实施例中,当工程师应用本申请所述的虚拟场景生成方法来生成虚拟城市时,工程师可以通过分析业务需求选择所需的线路分布模式。在本实施例中,可以对目标虚拟场景的场景特征信息进行信息转换,以得到该场景特征信息对应的张量信息。其中,张量信息的表现形式可以有多种,例如,可以以张量场的形式呈现,具体地,张量场是标量场或矢量场的泛化,张量场中可以分别给每个空间点分配一个标量或矢量。
例如,目标虚拟场景可以为虚拟城市,场景特征信息可以为该虚拟城市的人口密度分布数据,那么,可以对该人口密度分布数据进行转换,得到对应的张量场。
张量的数据结构可以有多种,例如,标量是0阶张量,矢量是一阶张量,二维矩阵是二阶张量,立体矩阵为三阶张量等。因此,对场景特征信息进行信息转换以得到张量信息的方式也可以有多种。
例如,张量的数据结构可以为二维矩阵,那么,可以通过将场景特征信息中的数据转换为对应的二维矩阵,来得到该场景特征信息对应的张量信息。举例来说,场景特征信息可以为虚拟城市的人口密度分布数据,可以将其中的各数据转换为对应的二维矩阵,从而得到人口密度分布数据对应的张量场。
在本实施例中,在确定生成基础划分网络所需的线路分布模式、以及目标虚拟场景的场景特征信息对应的张量信息后,即可基于该线路分布模式与该张量信息,进一步地生成基础划分网络,具体地,步骤“基于该线路分布模式与该张量信息,在初始虚拟场景中生成基础划分网络”,可以包括:
在初始虚拟场景中,生成服从线路分布模式的基础划分网络,其中,该基础划分网络包括至少一条待校正的划分线路;
根据张量信息,对待校正的划分线路进行几何校正,得到校正后的划分线路,作为基础划分网络中待调整的划分线路。
其中,生成服从线路分布模式的基础划分网络的方式可以有多种,例如,可以设计接收线路分布模式为参数的基础划分网络生成模块,并通过该生成模块来生成服从线路分布模式的基础划分网络。
作为示例,可以参考以下伪代码理解步骤“在初始虚拟场景中生成服从线路分布模式的基础划分网络”:
add new road according to pattern;
其中,“pattern”参数即表示线路分布模式,“new road”表示基础划分网络中的一条待校正的划分线路。
此外,该生成模块除了可以接受线路分布模式为参数以外,还可以接受其他辅助参数,并结合线路分布模式来生成基础划分网络,例如,辅助参数可以包括分支概率等。
在一实施例中,该生成模块可以基于L系统的思想来设计。具体地,L系统(Lindenmayer System,L System)是一种字符串重写机制,被广泛应用于植物生长过程的研究和建模。在该实施例中,可以基于L系统的思想来设计基础划分网络的生成模块,并接受生成基础划分网络所需的线路分布模式为参数,以在初始虚拟场景中生成服从线路分布模式的基础划分网络。
在该实施例中,该生成模块根据接受的参数,大范围地生成划分线路,例如,根据人口分布密度数据和分支概率,由各处向城市中心生成道路,然后在对生成的道路进行调整。
值得注意的是,仅是基于线路分布模式生成的基础划分网络,并未将目标虚拟场景的场景特征信息考虑在内,因此,该基础划分网络中的划分线路很可能与目标虚拟场景的场景特征不一致,也就是说,这样生成的基础划分网络中包括至少一条待校正的划分标记数据。
例如,目标虚拟场景可以为虚拟城市,生成基础划分网络所需的线路分布模式可以为网格模式,并且,描述目标虚拟场景的场景特征信息可以为人口密度分布数据。那么,在初始虚拟场景中生成服从网格模式的基础划分网络后,该基础划分网络的构成可以包括有若干个矩形街区,但此时划分线路的分布可能与虚拟城市的人口密度分布不一致。其中,不一致的情况可以有多种,例如,并不是在人口密度大的区域生成更多的划分线路,而是生成平均分布的划分线路;又如,生成的划分线路的角度并不与人口密度分布中密度变化的方向一致;等等。
因此,需要进一步地对仅基于线路分布模式生成的基础划分网络中的划分线路进行校正,例如,几何校正。
其中,几何校正为从几何维度对划分线路进行的校正,例如,调整划分线路的几何特征,譬如,划分线路的角度、长度、位置、宽度等。
在本实施例中,基于场景特征信息对基础划分网络中的划分线路进行几何校正,可以根据该场景特征信息对应的张量信息来实现。举例来说,目标虚拟场景可以为虚拟城市,场景特征信息可以为人口密度分布数据,那么,可以根据人口密度分布数据对应的张量信息,来调整基础划分网络中划分线路的角度,从而实现对划分线路进行几何校正,以得到既符合线路分布模式、又符合城市人口密度分布的划分线路。
作为示例,可以参考以下伪代码理解步骤“根据张量信息,对待校正的划分线路进行几何校正,得到校正后的划分线路”:
rotate road direction with angle deviation;
if rotated road population>=straight line population
use rotated road;
if new road population<threshold
rotate road according to population gradient field;
其中,“road”表示待校正的划分线路,核心思想为“rotate road direction with angle deviation”,即通过比较划分线路与人口密度分布数据(population)对应的张量信息之间的角度偏差“angle deviation”,来调整划分线路的角度“road direction”。
此外,根据张量信息对服从线路分布模式的基础划分网络中的划分线路进行几何校正后,还可以结合其他辅助参数,对校正后的划分线路进一步作调整,例如,辅助参数可以包括预设的人口阈值等,调整可以包括新增或删除划分线路等。作为示例,可以参考一下伪代码进行理解:
if new road population>branch threshold
add a branch road;
if new road population>threshold
add a road;
其中,“new road”即为几何校正后的划分线路,“new road population”则表示该划分线路对应的人口密度,“branch threshold”与“threshold”均指代预设的人口阈值,因此,可以通过将“new road population”与“branch threshold”作比较、以及将“new road population”与“threshold”作比较,以确定是否要新增分支道路或主道路。
进一步地,在对基础划分网络中的划分线路进行几何校正,得到校正后的划分线路后,即可相应地生成初始虚拟场景中最终的基础划分网络。
在本实施例中,在初始虚拟场景中生成基础划分网络后,即可以进一步地对该基础划分网络中的划分线路进行调整,以便后续生成最终的场景划分网络。例如,可以基于划分线路在基础划分网络中的线路交汇信息,对该划分线路进行调整,得到调整后的划分线路。
其中,线路交汇信息可以为描述基础划分网络中划分线路之间的交汇情况的相关信息。例如,划分线路的线路交汇信息可以为,该划分线路是否与其他划分线路相交,或者该划分线路是否在一定范围内靠近线路的交叉口,或者该划分线路是否在一定范围内靠近其他的划分线路而未相交,等等。
在本实施例中,基于划分线路在基础划分网络中的线路交汇信息,对该划分线路进行局部调整的方式可以有多种,例如,可以针对不同的线路交汇情况设计对应的线路约束规则,并基于划分线路的线路交汇信息,通过遵循该线路约束规则来对该划分线路进行局部调整。具体地,步骤“基于待调整的划分线路在基础划分网络中的线路交汇信息,对该待调整的划分线路进行调整,得到调整后的划分线路”,可以包括:
基于线路交汇信息,设计线路约束规则,并确定待调整的目标划分线路;
遵循线路约束规则,对目标划分线路进行调整,得到调整后的划分线路。
其中,线路约束规则为对划分线路进行约束的相关规则,例如,可以对划分线路的几何特征,如,角度、长度、位置等进行约束的相关规则;又如,可以为基于划分线路与其他划分线路之间的相交情况,来生成线路交汇点的约束 规则;等等。
在确定目标划分线路以后,可以通过遵循线路约束规则来对目标划分线路进行调整,以得到调整后的划分线路。
在一实施例中,线路约束规则可以为基于划分线路与其他划分线路之间的相交情况,来生成线路交汇点的约束规则。举例来说,当目标虚拟场景为虚拟城市时,划分线路即为虚拟城市中的道路,则生成的线路交汇点则可以对应地为虚拟城市中的道路交叉口。
例如,该线路约束规则可以为若检测到两条划分线路相交,则产生一个线路交汇点。作为示例,参考图3,可以看到在左图中检测到两条道路301和302相交,则可以对道路302进行调整,如右图所示,生成线路交汇点303,并且将道路302的长度缩短至3线路交汇点03。
又如,该线路约束规则可以为若检测到划分线路的终点在一定范围内靠近现有的线路交汇点,则延长该划分线路以到达该线路交汇点。具体地,可以参考图4,可以看到在左图中检测到一道路401的终点402,在虚线圆圈所示的范围内,靠近一个现有的线路交汇点403,则可以对其中的道路401进行调整,如右图所示,通过延长该道路401以使该道路401到达该线路交汇点403。
又如,该线路约束规则可以为若检测到划分线路在一定范围内靠近其他划分线路,则延长该划分线路至其他划分线路以生成一个线路交汇点。具体地,可以参考图5,可以看到在左图中检测到一道路501在虚线圆圈所示的范围内靠近其他道路502,则可以对该道路501进行局部调整,如右图所示,通过延长该道路501,使道路501与道路502相交,生成一个线路交汇点503。
值得注意的是,线路约束规则可以基于业务需求进行设置,上述仅为线路约束规则的一些举例,并不是所有情况。
在本实施例中,在对基础划分网络中的划分线路进行调整后,可以得到调整后的划分线路,因此,也确定了由该调整后划分线路组成的场景划分网络。
103、生成待添加至场景划分网络中的场景对象集合,其中,该场景对象集合包括至少一个场景对象。
其中,场景对象可以为目标虚拟场景中的内容对象,举例来说,当目标虚拟场景为虚拟城市时,内容对象包括建筑物、人物角色、动物角色、植被、水资源,等等;当目标虚拟场景为虚拟宇宙时,内容对象包括天体、探测器、卫星,等等。
其中,场景对象集合可以为包括目标虚拟场景中的场景对象的集合。例如,场景对象集合可以为包括目标虚拟场景中的所有场景对象的集合;又如,场景对象集合可以为包括目标虚拟场景中某一对象类别下的场景对象的集合,如,建筑物集合等。
生成场景对象集合的方式可以有多种,例如,可以通过组装待生成的场景对象的子模块,来生成该场景对象。具体地,步骤“生成待添加至场景划分网 络中的场景对象集合”,可以包括:
获取待生成的场景对象的子模块;
确定子模块对应的组合规则;
基于子模块的模块参数与组合规则,对子模块进行模块组合,得到组合后的场景对象;
根据组合后的场景对象,生成场景对象集合。
其中,待生成的场景对象的子模块可以为构成该待构成的场景对象的一部分。例如,场景对象为建筑物时,可以根据构成建筑物的各部分的尺寸信息,将建筑物拆分为不同的子模块,譬如,墙壁、窗户、墙角、大门等等。
其中,组合规则可以为描述组合子模块时所应遵循的规则。例如,当场景对象为建筑物时,子模块的组合规则可以为在建筑物主体的基础上由内向外地组合子模块,从而得到组合后的建筑物;等等。在实际应用中,组合规则可以基于业务需求进行设置。
其中,模块参数可以为子模块的相关参数,可以通过模块参数来描述子模块的外形、以及组合位置等信息。例如,模块参数可以包括子模块的尺寸参数、位置参数、颜色参数;等等。
在一实施例中,场景对象可以为建筑物,参考图6,可以获取图6中所示的待生成的建筑物的各个建筑子模块。进一步地,可以基于建筑子模块的组合规则,如,在图7中左图所示的建筑主体的基础上,依照各子模块的模块参数,如位置参数、尺寸参数等,通过结合该组合规则,对子模块进行模块组合,得到图7右图所示的组合后的建筑物。
同样地,可以生成目标虚拟场景中的其他场景对象,进而得到待添加至场景划分网络中的场景对象集合。
104、将场景对象与划分标记数据进行属性匹配,得到分配给该划分标记数据的候选场景对象。
其中,属性匹配是指,将场景对象的对象属性与划分标记数据的线路属性进行匹配,以确定该场景对象是否适于分配在该划分标记数据对应的区域中。
举例来说,目标虚拟场景可以为虚拟城市,场景对象可以为建筑物,且划分标记数据可以为城市中的道路,则由于每条道路都有对应的属性,如,该道路适合承载的人口密度、该道路的路宽、该道路属于商业地段或住宅地段,等等;且每个建筑物也有其对应的属性,如,该建筑物适合承载的最大人口密度、该建筑物需要承载的最小人口密度、该建筑物所属的建筑类别、该建筑物的所属的建筑风格、该建筑物群所适合的建筑物密度;等等。
那么,对于待添加至城市道路网络的建筑物集合,可以通过将建筑物与城市道路网络中的道路A进行属性匹配,以确定在该建筑物集合中,哪些建筑物是分配给道路A的候选建筑物。
其中,将候选场景对象分配给划分标记数据是指,建立起该候选场景对象 与该划分标记数据在空间位置上的关联关系。例如,当将本申请描述的虚拟场景生成方法应用在虚拟城市生成时,目标虚拟场景即为虚拟城市,候选场景对象可以为候选建筑物,划分标记数据则可以为虚拟城市中的道路。那么,将候选建筑物A分配给道路B,指的是建立起候选建筑物A与道路B在空间位置上的关联关系,例如,可以为规定候选建筑物A需要毗邻道路B放置;也可以为规定候选建筑物A需要放置在道路B对应的街区中;也可以为规定候选建筑物A不允许放置在道路B上,否则会使得道路B交通阻塞,等等。
具体地,步骤“将场景对象与划分标记数据进行属性匹配,得到分配给该划分标记数据的候选场景对象”,可以包括:
确定场景对象的对象属性、以及划分标记数据的线路属性;
对该对象属性与该线路属性进行属性匹配;
将匹配通过的场景对象确定为分配给划分标记数据的候选场景对象。
其中,对象属性为场景对象的相关属性,例如,当场景对象为建筑物时,对象属性可以包括该建筑物适合承载的最大人口密度、该建筑物需要承载的最小人口密度、该建筑物所属的建筑类别、该建筑物的所属的建筑风格、该建筑物群所适合的建筑物密度;等等。
举例来说,可以结合本申请所描述的虚拟场景生成方法,在Houdini中进行城市建模以生成虚拟城市。参考图8,用户可以通过Houdini来设置建筑物的对象属性,具体地,如图8中801所示,用户将建筑物的高度设置为大于150米,使其属于高层建筑,如802所示,并且,将该建筑物需要承载的最小人口密度设置为0.5,如803所示,以及将该建筑物设置为需要毗邻高速公路,如804所示。
其中,线路属性为划分标记数据的相关属性,例如,当划分标记数据为虚拟城市中的道路时,线路属性可以包括该道路适合承载的人口密度、该道路的路宽、该道路属于商业地段或住宅地段,等等。
在一实施例中,场景对象的对象属性与划分标记数据的线路属性,均为可供用户定义的。那么,在用户定义了对象属性与线路属性后,终端可以相应地确定场景对象的对象属性与划分标记数据的线路属性,并通过对该对象属性与该线路属性进行属性匹配,以确定该场景对象是否为分配给该划分标记数据的候选场景对象。
进行属性匹配的方式可以有多种,例如,可以对划分标记数据的线路属性进行分析,以确定针对能够分配至该划分标记数据中的场景对象所设置的限制或要求。进一步地,可以通过分析场景对象的对象属性值,来确定该场景对象是否符合针对该划分标记数据所设置的限制或要求,进而确定该场景对象是否为可分配给该划分标记数据的候选场景对象。
在一实施例中,当目标虚拟场景为虚拟城市时,场景对象可以为建筑物,场景划分网络可以为城市道路网路。作为示例,可以参考前述步骤在初始虚拟 场景中生成图9所示的城市道路网络,其中,该城市道路网络中包括至少一条道路。
并且,在生成待添加至该城市道路网络中的建筑物集合后,可以将建筑物的对象属性与城市道路网络中各道路的线路属性进行匹配,以确定分配给各道路的候选建筑物。参考图10,该图中将属性匹配的结果可视化展示,可以看到,在图10中,各道路上均放置了若干与其属性匹配相符的建筑物,其中,用白色矩形与灰色矩形表示不同类别的建筑物,例如,白色矩形可以表示住宅建筑物,灰色矩形可以表示医疗建筑物。
值得注意的是,匹配通过的场景对象并不是最后的目标场景对象,例如,图10中展示的各建筑物并不为最终分配给道路的目标建筑物,因为仍然存在较为严重的碰撞现象,或称重叠现象,例如,建筑物之间的碰撞现象,以及建筑物与道路之间的碰撞现象。因此,仍然需要进一步的筛选出目标建筑物。
也就是说,通过属性匹配仅能确定分配给划分标记数据的候选场景对象,进一步地,可以通过执行步骤105及后续的步骤,来从候选场景对象中筛选出目标场景对象。
105、根据候选场景对象与划分标记数据之间的位置关联信息,从该候选场景对象中筛选出目标场景对象。
其中,位置关联信息为描述候选场景对象与划分标记数据在位置上如何关联的信息。值得注意的是,该位置可以为不同维度空间中的位置,例如,可以为二维平面中的位置;又如,可以为三维空间中的位置;又如,可以为更高维度空间中的位置;等等。
进一步地,候选场景对象与划分标记数据在位置上的关联,可以有多种情况,例如,可以为候选场景对象与划分标记数据在位置上重叠;又如,可以为候选场景对象与划分标记数据在位置上保持一定范围内的距离;又如,可以为候选场景对象与划分标记数据在位置上间隔一定阈值以上的距离;等等。
由于在生成目标虚拟场景,或者在搭建目标虚拟场景的过程中,需要将场景对象与场景划分网络中的划分标记数据进行匹配,以生成完整的场景,因此,需要在确定候选场景对象的基础上,进一步地确定可分配给划分标记数据的目标场景对象,例如,可以根据位置关联信息,从候选场景对象中筛选出目标场景对象。具体地,步骤“根据候选场景对象与划分线路之间的位置关联信息,从该候选场景对象中筛选出目标场景对象”,可以包括:
基于候选场景对象的几何特征,确定该候选场景对象与该划分线路之间的位置关联信息;
根据该位置关联信息,对该候选场景对象与该划分线路进行碰撞检测;
从碰撞检测通过的候选场景对象中,筛选出分配给该划分线路的目标场景对象。
其中,候选场景对象的几何特征为从几何维度对候选场景对象进行描述得 到的特征,例如,几何特征可以包括候选场景对象在场景中的位置、占据的面积或空间等特征。
由于位置关联信息为描述候选场景对象与划分标记数据在位置上如何关联的信息,因此,可以基于候选场景对象的几何特征,来确定该候选场景对象与划分标记数据之间的位置关联信息。例如,确定候选场景对象与划分标记数据在位置上是否重叠;又如,确定候选场景对象与划分标记数据是否在位置上保持一定范围内的距离;又如,确定候选场景对象与划分标记数据是否在位置上间隔一定阈值以上的距离;等等。
其中,碰撞检测为判断两个对象(或者称两个碰撞体之间)是否交迭或重叠的检测。碰撞检测可以包括静态碰撞体之间的碰撞检测、动态碰撞体之间的碰撞检测、以及静态碰撞体与动态碰撞体之间的碰撞检测。具体地,若碰撞体之间不存在交迭或重叠,则可以认为检测通过,否则,则可以认为检测未通过。
碰撞检测的实现方式可以有多种,例如,可以通过生成矩形或圆形来包裹住碰撞体,并通过检测该矩形或圆形之间是否发生交迭或重叠来实现;又如,可以通过迭代地生成若干矩形或圆形,使得通过若干矩形或圆形的组合形状来包裹住碰撞体,并通过检测不同的碰撞体对应的组合形状之间是否发生交迭或重叠来实现;等等。
在一实施例中,当目标虚拟场景为待生成的虚拟城市时,候选场景对象可以为候选建筑物,划分标记数据可以为虚拟城市中的道路。具体地,可以基于候选建筑物的几何特征,来确定该候选建筑物与道路之间的位置关联信息,并根据该位置关联信息,对该候选建筑物与该短路进行碰撞检测,以确定该候选建筑物是否被放置在该道路上造成了道路阻塞。
若检测通过,则表示该候选建筑物未放置在该道路上;若检测不通过,则表示该候选建筑物放置该道路上造成了道路阻塞。因此,可以从碰撞检测通过的候选建筑物中,进一步地筛选分配给该道路的目标建筑物。参考图11,图11中展示了在图10的基础上,对图10中的候选建筑物与城市道路网络中的道路进行碰撞检测后,筛除了检测未通过的候选建筑物之后的结果。
在本实施例中,可以进一步地从碰撞检测通过的候选场景对象中,筛选出分配给该划分标记数据的目标场景对象,具体地,步骤“从碰撞检测通过的候选场景对象中,筛选出分配给该划分标记数据的目标场景对象”,可以包括:
确定检测通过的候选场景对象所属的对象类别,其中,该对象类别具有对应的对象密度约束参数;
根据该对象密度约束参数,对该对象类别下的候选场景对象进行筛选,得到筛选后的目标场景对象作为分配给所述划分线路的目标场景对象。
其中,对象密度约束参数为描述特定对象类别的场景对象其自身对密度的约束要求。例如,若场景对象为虚拟城市中的建筑物,那么对象类别则可以为建筑物所属的类别,例如,建筑类别可以包括住宅建筑物、学校、监狱、写字 楼等等。不同类别的建筑物其自身具有不同的建筑物密度约束规则,例如,住宅建筑物的密度约束可以为0.7,表示住宅建筑物的最大密度为0.7;等等。
由于场景划分网络中可以包括有多个对象类别下的候选场景对象,而不同的对象类别下的候选场景对象可以具有不同的对象密度约束参数,因此,在确定检测通过的候选场景对象所属的对象类别后,即可以基于对象密度约束参数,对该对象类别下的候选场景对象进行筛选,以避免对象过密的情况。
参考图12,图12中展示了在图11的基础上,针对图11中不同类别的候选建筑物进行筛选后呈现的结果。具体地,在图11中,各类别的建筑物均设置有其对应的对象密度约束参数,若某类别下候选建筑物的当前密度与其对象密度约束参数不符,如,当前密度远大于对象密度约束参数,则基于该对象密度约束参数,删除该类别下的候选建筑物,以对该类别下的候选建筑物进行筛选,并得到图12中所示的筛选后的目标建筑物。
106、将目标场景对象与划分标记数据进行匹配,生成目标虚拟场景。
在通过前述步骤,从场景对象集合中确定候选场景对象,并进一步地确定目标场景对象后,由于目标场景对象之间还有可能存在碰撞现象,因此,还可以包括有针对目标场景对象之间的碰撞检测的步骤。具体地,步骤“将目标场景对象与划分标记数据进行匹配,生成目标虚拟场景”,可以包括:
对目标场景对象进行排序,以确定各目标场景对象的优先级别;
根据优先级别,对各目标场景对象进行碰撞检测;
将碰撞检测通过的目标场景对象与划分标记数据进行匹配,生成目标虚拟场景。
其中,由于对于划分标记数据来说,从候选场景对象中筛选出的目标场景对象仍然可能存在过密的问题,例如,若如图12所示将建筑物布局在道路网络中,图12中建筑物之间仍存在过密的问题。因此,可以通过对目标场景对象进行排序,来确定各目标场景对象的优先级别,以使得后续可以基于各目标场景对象的优先级别,选择出分配至划分标记数据的目标场景对象,例如,在密度限制的前提下,可以将优先级别高的目标场景对象分配至划分标记数据。
其中,对目标场景对象排序的方式可以有多种,例如,可以基于目标场景对象的对象属性,对目标场景对象进行排序;譬如,目标场景对象为建筑物时,可以基于建筑物的占地面积来对建筑物进行排序。又如,可以基于目标场景对象的对象类别,对目标场景对象进行排序;譬如,目标场景对象为建筑物时,可以规定住宅类建筑物比医疗类建筑物具有更高的优先级。
在通过对目标场景对象进行排序得到各目标场景对象的优先级别后,可以进一步地基于该优先级别,通过在目标场景对象之间进行碰撞检测,来确定可以分配至划分标记数据的目标场景对象。具体地,步骤“根据优先级别,对目标场景对象进行碰撞检测”,可以包括:
对所属同一对象类别的目标场景对象进行碰撞检测;
基于检测结果,对该对象类别对应的目标场景对象进行筛选,得到筛选后的目标场景对象;
基于筛选后的目标场景对象的优先级别,从筛选后的目标场景对象中确定出碰撞检测通过的目标场景对象。
值得注意的是,在此处对所属同一对象类别的目标场景对象进行碰撞检测,可以旨在获悉该对象类别下的目标场景对象的碰撞情况,因此,该检测结果为表征该对象类别下的目标场景对象之间碰撞程度或重叠程度的相关信息。
进一步地,可以基于检测结果,对该对象类别下的目标场景对象进行筛选,例如,可以在碰撞程度严重的情况下,筛除掉更多的目标场景对象,避免导致分配至划分标记数据的目标场景对象过密的问题。
在一实施例中,场景对象可以为建筑物,可以在图12所示的对候选建筑物进行筛选后所得的目标建筑物的基础上,通过对各类别的目标建筑物进行碰撞检测以获取该类别的目标建筑物当前的碰撞情况,并进一步地基于该碰撞情况,对该类别下的目标建筑物进行筛选,以避免建筑物过密的问题。例如,若某类别的目标建筑物,当前的碰撞情况较严重,则对该类别的目标建筑物设置较大的筛除比例。
又如,可以使用同样的筛除比例来筛除各类别下的目标建筑物,具体地,参考图13,图13呈现的是对于图12中各类别下的目标建筑物,均筛除50%后所剩的目标建筑物的效果。
进一步地,可以基于筛选后的目标场景对象的优先级别,从筛选后的目标场景对象中确定检测通过的目标场景对象。例如,参考图13可知,即使在对各类别下的目标建筑物均筛除50%后,剩余的目标建筑物之间仍旧存在碰撞问题。也就是说,通过前述的多次筛选方式,虽然可以有效地将目标场景对象的数量快速地缩减到更小的区间内,但是仍旧可能存在目标场景对象之间的碰撞问题。因此,可以依据剩余的目标场景对象的优先级别,从其中选择出最后分配至划分标记数据的目标场景对象。
例如,可以将优先级别最大的目标场景对象,确定为最终检测通过的目标场景对象。作为示例,参考图13可知,仍有多个目标建筑物之间存在有碰撞问题的现象,因此,可以在出现碰撞问题的区域,也即在目标建筑物重叠的区域,仅保存优先级最大的目标建筑物,而筛除掉其他的目标建筑物,例如,可以设置白色矩形代表的建筑物比灰色矩形代表的建筑物具有更高的优先级,那么,可以在不同类别的建筑物之间存在碰撞问题时,保留优先级大的建筑物,从而解决问题并生成图14所示的最后效果。
由上可知,本实施例可以大大提高虚拟场景生成的效率,具体地,该方案可以基于待生成的目标虚拟场景的场景特征信息,来生成与场景特征相符的场景划分网络,这可以使得最后基于该场景划分网络所生成的目标虚拟场景具有较高的仿真度与可信度。并且,该方案在将场景对象分配至场景划分网络中的 划分标记数据的过程中,既考虑了场景对象与划分标记数据之间的属性匹配程度,又考虑了场景对象与划分标记数据之间的位置关联信息,这不仅能够高效地确定场景对象在场景划分网络中应该放置的位置,还可以有效地避免在场景生成过程中由于位置重叠或物体碰撞导致的场景虚假问题。
此外,在该方案中,用户仅需提供描述待生成的目标虚拟场景的场景特征信息,以及待组装成场景对象的子模块,便即可通过该方案程序化地生成完整且生动的虚拟场景,这能够极大地提高虚拟场景生成的便捷程度与自动化程度。
根据上面实施例所描述的方法,以下将举例进一步详细说明。
在本实施例中,将以虚拟场景生成装置集成在服务器为例进行说明,该服务器可以是单台服务器,也可以是由多个服务器组成的服务器集群;该终端可以为手机、平板电脑、笔记本电脑等设备。
如图15所示,一种虚拟场景生成方法,具体流程如下:
201、终端向服务器发送待生成的目标虚拟场景对应的场景特征信息。
在本实施例中,可以将虚拟场景生成方法应用在游戏开发中,用以生成虚拟城市。终端可以向服务器发送描述待生成的虚拟城市对应的场景特征信息,如,人口密度分布数据。
202、服务器获取该场景特征信息。
203、服务器基于该场景特征信息,在初始虚拟场景中生成场景划分网络,其中,该场景划分网络包括至少一个划分标记数据,该划分标记数据用于划分初始虚拟场景。
在一实施例中,参考图16,服务器可以在Houdini软件中生成城市道路1601,例如,可以在设置道路风格16011后生成场景划分网络,即城市道路网络。值得注意的是,Houdini中还提供有手工生成道路的曲线1602以及修改道路1603的功能。
此外,服务器还可以对城市道路网络中的道路设置相应的配置1604,如道路属性等,进一步地,服务器还可以设置摆放道路设施1605,例如,垃圾桶、长椅等。可选的,还可以包括修改道路设施1606的功能。
204、服务器生成待添加至场景划分网络中的场景对象集合,其中,该场景对象集合包括至少一个场景对象。
在一实施例中,参考图16,服务器可以在获取由手工建模得到的建筑物子模块1607后,通过组合该子模块来模块化地生成建筑资产1608,从而生成待添加至城市道路网络中的建筑物集合。进一步地,服务器还可以对生成的建筑资产,进行手动摆放1609。
可选地,也可以通过手工建模1610的方式生成建筑物,进而得到建筑物集合。进一步地,服务器还可以对生成的建筑物设置对应的属性,从而对其设定摆放准则1611。
205、服务器将场景对象与场景划分网络中的划分标记数据进行属性匹配, 得到分配给该划分标记数据的候选场景对象。
在一实施例中,服务器可以通过匹配道路配置与建筑物的摆放准则1612来摆放建筑物,得到分配给该道路的候选建筑物。
206、服务器根据候选场景对象与划分标记数据之间的位置关联信息,从候选场景对象中筛选出目标场景对象。
在一实施例中,服务器通过根据候选建筑物与道路之间的位置关联信息,来对候选建筑物与道路之间进行碰撞检测,并对检测通过的目标建筑物之间,再进行二次碰撞检测,从而筛选出最终的目标建筑物。
207、服务器将目标场景对象与划分标记数据进行匹配,生成目标虚拟场景。
在一实施例中,可以将本申请实施例所述的虚拟场景生成方法,在Houdini内开发为一系列的可重复使用的数字资产文件格式(Houdini Digital Asset,HDA),以支持需要模拟城市的游戏项目使用,参考图16,在设定了道路配置和建筑资产的摆放准则后,可以自动根据配置和准则摆放资产1612,从而实现步骤205、206及207的执行。进一步,还可以对某些区域进行修改1613。
其中,图17即为HDA连接起来使用的节点网络,包括子网遮罩节点、道路生长节点、子网道路属性节点、子网建筑物节点、建筑物放置节点、道路实例模型导出节点、建筑物实例模型导出节点。基于该节点网络生成后得到的虚拟城市则如图18所示,进一步地,在Unreal Engine游戏引擎内呈现的效果则如图19中2001所示。
208、服务器生成该目标虚拟场景的场景渲染数据,并将该场景渲染数据发送给终端。
例如,参考图16,服务器可以将该场景渲染数据以缓存文件1614的形式保存,以使得终端可以基于该缓存文件展示服务器所生成的目标虚拟场景。
209、终端接收服务器发送的场景渲染数据,并基于该场景渲染数据展示生成后的目标虚拟场景。
例如,参考图16,终端上除了运行有Houdini引擎1615外,还可以运行有Unreal软件的引擎1616,或者Unity3D软件1616。并且,终端在接收到该场景渲染数据后,可以基于该场景渲染数据如图18或图19所示地展示服务器所渲染生成的虚拟城市。
当将上述方案应用在游戏应用中生成虚拟城市,并通过Houdini来实现时,可利用Houdini大量的建模功能,减低游戏3D美术的学习成本和提高可控性。例如,建筑模块的模型在游戏引擎内也可得到运行效率的最大优化,建筑模块的输出也可利用引擎对大范围场景特有的技术来加大运行效率,并且,建筑放置算法也可基于引擎现有的美术资产来加快制作效率。
为了更好地实施以上方法,相应的,本申请实施例还提供一种虚拟场景生成装置,其中,该虚拟场景生成装置可以集成在服务器或终端中。该服务器可以是单台服务器,也可以是由多个服务器组成的服务器集群;该终端可以为手 机、平板电脑、笔记本电脑等设备。如图20所示,该虚拟场景生成装置可以包括信息获取单元301,网络生成单元302,集合生成单元303,属性匹配单元304,目标筛选单元305以及目标分配单元306,如下:
信息获取单元301,用于获取待生成的目标虚拟场景对应的场景特征信息;
网络生成单元302,用于基于所述场景特征信息,在初始虚拟场景中生成场景划分网络,其中,所述场景划分网络包括至少一个划分标记数据,所述划分标记数据用于划分所述初始虚拟场景;
集合生成单元303,用于生成待添加至所述场景划分网络中的场景对象集合,其中,所述场景对象集合包括至少一个场景对象;
属性匹配单元304,用于将所述场景对象与所述划分标记数据进行属性匹配,得到分配给所述划分标记数据的候选场景对象;
目标筛选单元305,用于根据所述候选场景对象与所述划分标记数据之间的位置关联信息,从所述候选场景对象中筛选出目标场景对象;
目标分配单元306,用于将所述目标场景对象与所述划分标记数据进行匹配,生成目标虚拟场景。
在一实施例中,参考图21,所述划分标记数据为划分线路,所述网络生成单元302,可以包括:
基础生成子单元3021,用于基于所述场景特征信息,在所述初始虚拟场景中生成基础划分网络,其中,所述基础划分网络包括至少一条待调整的划分线路;
局部调整子单元3022,用于基于所述待调整的划分线路在所述基础划分网络中的线路交汇信息,对所述待调整的划分线路进行调整,得到调整后的划分线路;
网络确定子单元3023,用于根据所述调整后的划分线路,确定所述场景划分网络。
在一实施例中,所述基础生成子单元3021,可以用于:
确定生成所述基础划分网络所需的线路分布模式;对所述场景特征信息进行信息转换,得到所述场景特征信息对应的张量信息;基于所述线路分布模式与所述张量信息,在所述初始虚拟场景中生成所述基础划分网络。
在一实施例中,所述基础生成子单元3021,可以具体用于:
在所述初始虚拟场景中,生成服从所述线路分布模式的所述基础划分网络,其中,所述基础划分网络包括至少一条待校正的划分线路;根据所述张量信息,对所述待校正的划分线路进行几何校正,得到校正后的划分线路,作为所述基础划分网络中待调整的划分线路。
在一实施例中,所述局部调整子单元3022,可以用于:
基于所述线路交汇信息,设计线路约束规则,并确定待调整的目标划分线路;遵循所述线路约束规则,对所述目标划分线路进行调整,得到调整后的划分线路。
在一实施例中,参考图22,所述目标筛选单元305,可以包括:
关联确定子单元3051,用于基于所述候选场景对象的几何特征,确定所述 候选场景对象与所述划分线路之间的位置关联信息;
第一检测子单元3052,用于根据所述位置关联信息,对所述候选场景对象与所述划分线路进行碰撞检测;
目标筛选子单元3053,用于从所述碰撞检测通过的候选场景对象中,筛选出分配给所述划分线路的目标场景对象。
在一实施例中,所述目标筛选子单元3053,可以用于:
确定所述碰撞检测通过的候选场景对象所属的对象类别,其中,所述对象类别具有对应的对象密度约束参数;根据所述对象密度约束参数,对所述对象类别下的候选场景对象进行筛选,得到筛选后的目标场景对象作为分配给所述划分线路的目标场景对象。
在一实施例中,参考图23,所述目标分配单元306,可以包括:
对象排序子单元3061,可以用于对所述目标场景对象进行排序,确定各目标场景对象的优先级别;
第二检测子单元3062,可以用于根据所述优先级别,对各目标场景对象进行碰撞检测;
目标分配子单元3063,可以用于将所述碰撞检测通过的目标场景对象与所述划分标记数据进行匹配,生成目标虚拟场景。
在一实施例中,所述第二检测子单元3062,可以用于:
对所属同一对象类别的目标场景对象进行碰撞检测;基于所述检测结果,对所述对象类别对应的目标场景对象进行筛选,得到筛选后的目标场景对象;基于所述筛选后的目标场景对象的优先级别,从所述筛选后的目标场景对象中确定出所述碰撞检测通过的目标场景对象。
在一实施例中,参考图24,所述集合生成单元303,可以包括:
模块获取子单元3031,可以用于获取待生成的场景对象的子模块;
规则确定子单元3032,可以用于确定所述子模块对应的组合规则;
模块组合子单元3033,可以用于基于所述子模块的模块参数与所述组合规则,对所述子模块进行模块组合,得到组合后的场景对象;
集合生成子单元3034,可以用于根据所述组合后的场景对象,生成所述场景对象集合。
在一实施例中,参考图25,所述属性匹配单元304,可以包括:
属性确定子单元3041,可以用于确定所述场景对象的对象属性、以及所述划分标记数据的线路属性;
属性匹配子单元3042,可以用于对所述对象属性与所述线路属性进行属性匹配;
候选确定子单元3043,可以用于将匹配通过的场景对象确定为分配给所述划分标记数据的候选场景对象。
此外,本申请实施例还提供一种电子设备,该电子设备可以为服务器或终 端等设备,如图26所示,其示出了本申请实施例所涉及的电子设备的结构示意图。该电子设备包括有一个或一个以上计算机可读存储介质的存储器401、输入单元402、显示单元403、无线保真(WiFi,Wireless Fidelity)模块404、包括有一个或者一个以上处理核心的处理器405、以及电源406等部件。其中:
存储器401可用于存储软件程序以及模块,处理器405通过运行存储在存储器401的软件程序以及模块,从而执行各种功能应用以及数据处理。存储器401可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。相应地,存储器401还可以包括存储器控制器,以提供处理器405和输入单元402对存储器401的访问。
输入单元402可用于接收输入的数字或字符信息,以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。
显示单元403可用于显示由用户输入的信息或提供给用户的信息以及电子设备的各种图形用户接口,这些图形用户接口可以由图形、文本、图标、视频和其任意组合来构成。
处理器405是电子设备的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器401内的软件程序和/或模块,以及调用存储在存储器401内的数据,执行电子设备的各种功能和处理数据,从而对手机进行整体监控。
电子设备还包括给各个部件供电的电源406(比如电池),优选的,电源可以通过电源管理系统与处理器405逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
本领域普通技术人员可以理解,上述实施例的各种方法中的全部或部分步骤可以通过指令来完成,或通过指令控制相关的硬件来完成,该指令可以存储于一计算机可读存储介质中,并由处理器进行加载和执行。
为此,本申请实施例提供一种计算机可读存储介质,其中存储有多条指令,该指令能够被处理器进行加载,以执行本申请实施例所提供的任一种虚拟场景生成方法中的步骤。其中,该存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、磁盘或光盘等。
根据本申请的一个方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述虚拟场景生成方面的各种可选实现方式中提供的方法。
以上对本申请实施例所提供的一种虚拟场景生成方法、装置、设备和存储介质进行详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均 会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (24)

  1. 一种虚拟场景生成方法,由电子设备执行,包括:
    获取待生成的目标虚拟场景对应的场景特征信息;
    基于所述场景特征信息,在初始虚拟场景中生成场景划分网络,其中,所述场景划分网络包括至少一个划分标记数据,所述划分标记数据用于划分所述初始虚拟场景;
    生成待添加至所述场景划分网络中的场景对象集合,其中,所述场景对象集合包括至少一个场景对象;
    将所述场景对象与所述划分标记数据进行属性匹配,得到分配给所述划分标记数据的候选场景对象;
    根据所述候选场景对象与所述划分标记数据之间的位置关联信息,从所述候选场景对象中筛选出目标场景对象;
    将所述目标场景对象与所述划分标记数据进行匹配,生成目标虚拟场景。
  2. 根据权利要求1所述的虚拟场景生成方法,其中,所述划分标记数据为划分线路,所述基于所述场景特征信息,在初始虚拟场景中生成场景划分网络,包括:
    基于所述场景特征信息,在所述初始虚拟场景中生成基础划分网络,其中,所述基础划分网络包括至少一条待调整的划分线路;
    基于所述待调整的划分线路在所述基础划分网络中的线路交汇信息,对所述待调整的划分线路进行调整,得到调整后的划分线路;
    根据所述调整后的划分线路,确定所述场景划分网络。
  3. 根据权利要求2所述的虚拟场景生成方法,其中,所述基于所述场景特征信息,在所述初始虚拟场景中生成基础划分网络,包括:
    确定生成所述基础划分网络所需的线路分布模式;
    对所述场景特征信息进行信息转换,得到所述场景特征信息对应的张量信息;
    基于所述线路分布模式与所述张量信息,在所述初始虚拟场景中生成所述基础划分网络。
  4. 根据权利要求3所述的虚拟场景生成方法,其中,所述基于所述线路分布模式与所述张量信息,在所述初始虚拟场景中生成所述基础划分网络,包括:
    在所述初始虚拟场景中,生成服从所述线路分布模式的所述基础划分网络,其中,所述基础划分网络包括至少一条待校正的划分线路;
    根据所述张量信息,对所述待校正的划分线路进行几何校正,得到校正后的划分线路,作为所述基础划分网络中待调整的划分线路。
  5. 根据权利要求2所述的虚拟场景生成方法,其中,所述基于所述待调整的划分线路在所述基础划分网络中的线路交汇信息,对所述待调整的划分线路进行调整,得到调整后的划分线路,包括:
    基于所述线路交汇信息,设计线路约束规则,并确定待调整的目标划分线路;
    遵循所述线路约束规则,对所述目标划分线路进行调整,得到调整后的划分线路。
  6. 根据权利要求1所述的虚拟场景生成方法,其中,所述划分标记数据为划分线路,所述根据所述候选场景对象与所述划分标记数据之间的位置关联信息,从所述候选场景对象中筛选出目标场景对象,包括:
    基于所述候选场景对象的几何特征,确定所述候选场景对象与所述划分线路之间的位置关联信息;
    根据所述位置关联信息,对所述候选场景对象与所述划分线路进行碰撞检测;
    从所述碰撞检测通过的候选场景对象中,筛选出分配给所述划分线路的目标场景对象。
  7. 根据权利要求6所述的虚拟场景生成方法,其中,所述从所述碰撞检测通过的候选场景对象中,筛选出分配给所述划分线路的目标场景对象,包括:
    确定所述碰撞检测通过的候选场景对象所属的对象类别,其中,所述对象类别具有对应的对象密度约束参数;
    根据所述对象密度约束参数,对所述对象类别下的候选场景对象进行筛选,得到筛选后的目标场景对象作为分配给所述划分线路的目标场景对象。
  8. 根据权利要求1所述的虚拟场景生成方法,其中,所述将所述目标场景对象与所述划分标记数据进行匹配,生成目标虚拟场景,包括:
    对所述目标场景对象进行排序,确定各目标场景对象的优先级别;
    根据所述优先级别,对各目标场景对象进行碰撞检测;
    将所述碰撞检测通过的目标场景对象与所述划分标记数据进行匹配,生成目标虚拟场景。
  9. 根据权利要求8所述的虚拟场景生成方法,其中,所述根据所述优先级别,对各目标场景对象进行碰撞检测,包括:
    对所属同一对象类别的目标场景对象进行碰撞检测;
    基于所述检测结果,对所述对象类别对应的目标场景对象进行筛选,得到筛选后的目标场景对象;
    基于所述筛选后的目标场景对象的优先级别,从所述筛选后的目标场景对象中确定出所述碰撞检测通过的目标场景对象。
  10. 根据权利要求1所述的虚拟场景生成方法,其中,所述生成待添加至所述场景划分网络中的场景对象集合,包括:
    获取待生成的场景对象的子模块;
    确定所述子模块对应的组合规则;
    基于所述子模块的模块参数与所述组合规则,对所述子模块进行模块组合,得到组合后的场景对象;
    根据所述组合后的场景对象,生成所述场景对象集合。
  11. 根据权利要求1所述的虚拟场景生成方法,其中,所述将所述场景对象与 所述划分标记数据进行属性匹配,得到分配给所述划分标记数据的候选场景对象,包括:
    确定所述场景对象的对象属性、以及所述划分标记数据的线路属性;
    对所述对象属性与所述线路属性进行属性匹配;
    将匹配通过的场景对象确定为分配给所述划分标记数据的候选场景对象。
  12. 一种虚拟场景生成装置,其特征在于,包括:
    信息获取单元,用于获取待生成的目标虚拟场景对应的场景特征信息;
    网络生成单元,用于基于所述场景特征信息,在初始虚拟场景中生成场景划分网络,其中,所述场景划分网络包括至少一个划分标记数据,所述划分标记数据用于划分所述初始虚拟场景;
    集合生成单元,用于生成待添加至所述场景划分网络中的场景对象集合,其中,所述场景对象集合包括至少一个场景对象;
    属性匹配单元,用于将所述场景对象与所述划分标记数据进行属性匹配,得到分配给所述划分标记数据的候选场景对象;
    目标筛选单元,用于根据所述候选场景对象与所述划分标记数据之间的位置关联信息,从所述候选场景对象中筛选出目标场景对象;
    目标分配单元,用于将所述目标场景对象与所述划分标记数据进行匹配,生成目标虚拟场景。
  13. 根据权利要求12所述的虚拟场景生成装置,其中,所述划分标记数据为划分线路,所述网络生成单元,包括:
    基础生成子单元,用于基于所述场景特征信息,在所述初始虚拟场景中生成基础划分网络,其中,所述基础划分网络包括至少一条待调整的划分线路;
    局部调整子单元,用于基于所述待调整的划分线路在所述基础划分网络中的线路交汇信息,对所述待调整的划分线路进行调整,得到调整后的划分线路;
    网络确定子单元,用于根据所述调整后的划分线路,确定所述场景划分网络。
  14. 根据权利要求13所述的虚拟场景生成装置,其中,所述基础生成子单元,用于:
    确定生成所述基础划分网络所需的线路分布模式;对所述场景特征信息进行信息转换,得到所述场景特征信息对应的张量信息;基于所述线路分布模式与所述张量信息,在所述初始虚拟场景中生成所述基础划分网络。
  15. 根据权利要求14所述的虚拟场景生成装置,其中,所述基础生成子单元,用于:
    在所述初始虚拟场景中,生成服从所述线路分布模式的所述基础划分网络,其中,所述基础划分网络包括至少一条待校正的划分线路;根据所述张量信息,对所述待校正的划分线路进行几何校正,得到校正后的划分线路,作为所述基础划分网络中待调整的划分线路。
  16. 根据权利要求13所述的虚拟场景生成装置,其中,所述局部调整子单元, 用于:
    基于所述线路交汇信息,设计线路约束规则,并确定待调整的目标划分线路;遵循所述线路约束规则,对所述目标划分线路进行调整,得到调整后的划分线路。
  17. 根据权利要求12所述的虚拟场景生成装置,其中,所述划分标记数据为划分线路,所述目标筛选单元,包括:
    关联确定子单元,用于基于所述候选场景对象的几何特征,确定所述候选场景对象与所述划分线路之间的位置关联信息;
    第一检测子单元,用于根据所述位置关联信息,对所述候选场景对象与所述划分线路进行碰撞检测;
    目标筛选子单元,用于从所述碰撞检测通过的候选场景对象中,筛选出分配给所述划分线路的目标场景对象。
  18. 根据权利要求17所述的虚拟场景生成装置,其中,所述目标筛选子单元,用于:
    确定所述碰撞检测通过的候选场景对象所属的对象类别,其中,所述对象类别具有对应的对象密度约束参数;根据所述对象密度约束参数,对所述对象类别下的候选场景对象进行筛选,得到筛选后的目标场景对象作为分配给所述划分线路的目标场景对象。
  19. 根据权利要求12所述的虚拟场景生成装置,其中,所述目标分配单元,包括:
    对象排序子单元,用于对所述目标场景对象进行排序,确定各目标场景对象的优先级别;
    第二检测子单元,用于根据所述优先级别,对各目标场景对象进行碰撞检测;
    目标分配子单元,用于将所述碰撞检测通过的目标场景对象与所述划分标记数据进行匹配,生成目标虚拟场景。
  20. 根据权利要求19所述的虚拟场景生成装置,其中,所述第二检测子单元,用于:
    对所属同一对象类别的目标场景对象进行碰撞检测;基于所述检测结果,对所述对象类别对应的目标场景对象进行筛选,得到筛选后的目标场景对象;基于所述筛选后的目标场景对象的优先级别,从所述筛选后的目标场景对象中确定出所述碰撞检测通过的目标场景对象。
  21. 根据权利要求12所述的虚拟场景生成装置,其中,所述集合生成单元,包括:
    模块获取子单元,用于获取待生成的场景对象的子模块;
    规则确定子单元,用于确定所述子模块对应的组合规则;
    模块组合子单元,用于基于所述子模块的模块参数与所述组合规则,对所述子模块进行模块组合,得到组合后的场景对象;
    集合生成子单元,用于根据所述组合后的场景对象,生成所述场景对象集合。
  22. 根据权利要求12所述的虚拟场景生成装置,其中,所述属性匹配单元,包括:
    属性确定子单元,用于确定所述场景对象的对象属性、以及所述划分标记数据的线路属性;
    属性匹配子单元,用于对所述对象属性与所述线路属性进行属性匹配;
    候选确定子单元,用于将匹配通过的场景对象确定为分配给所述划分标记数据的候选场景对象。
  23. 一种电子设备,其特征在于,包括存储器和处理器;所述存储器存储有应用程序,所述处理器用于运行所述存储器内的应用程序,以执行权利要求1至11任一项所述的虚拟场景生成方法中的操作。
  24. 一种计算机可读存储介质,其特征在于,所述存储介质存储有多条指令,所述指令适于处理器进行加载,以执行权利要求1至11任一项所述的虚拟场景生成方法中的步骤。
PCT/CN2022/073766 2021-02-07 2022-01-25 一种虚拟场景生成方法、装置、设备和存储介质 WO2022166681A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/985,639 US20230071213A1 (en) 2021-02-07 2022-11-11 Virtual scenario generation method, apparatus and device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110178014.2 2021-02-07
CN202110178014.2A CN112784002B (zh) 2021-02-07 2021-02-07 一种虚拟场景生成方法、装置、设备和存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/985,639 Continuation US20230071213A1 (en) 2021-02-07 2022-11-11 Virtual scenario generation method, apparatus and device and storage medium

Publications (1)

Publication Number Publication Date
WO2022166681A1 true WO2022166681A1 (zh) 2022-08-11

Family

ID=75761382

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/073766 WO2022166681A1 (zh) 2021-02-07 2022-01-25 一种虚拟场景生成方法、装置、设备和存储介质

Country Status (3)

Country Link
US (1) US20230071213A1 (zh)
CN (1) CN112784002B (zh)
WO (1) WO2022166681A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784002B (zh) * 2021-02-07 2023-09-26 腾讯科技(深圳)有限公司 一种虚拟场景生成方法、装置、设备和存储介质
CN113791914B (zh) * 2021-11-17 2022-03-11 腾讯科技(深圳)有限公司 对象处理方法、装置、计算机设备、存储介质及产品
CN114972627A (zh) * 2022-04-11 2022-08-30 深圳元象信息科技有限公司 场景生成方法、电子设备及存储介质
CN115060269A (zh) * 2022-06-08 2022-09-16 南威软件股份有限公司 一种刻画道路路网与车辆行驶轨迹模型的方法及装置
CN115770394B (zh) * 2023-02-10 2023-05-23 广州美术学院 基于hapi实现的蓝图化插件应用方法
CN116822259B (zh) * 2023-08-30 2023-11-24 北京国网信通埃森哲信息技术有限公司 基于场景模拟的评价信息生成方法、装置和电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109754458A (zh) * 2018-12-11 2019-05-14 新华三大数据技术有限公司 三维场景的构建方法、装置及计算机可读存储介质
CN110473293A (zh) * 2019-07-30 2019-11-19 Oppo广东移动通信有限公司 虚拟对象处理方法及装置、存储介质和电子设备
US20200312042A1 (en) * 2019-03-27 2020-10-01 Electronic Arts Inc. Three dimensional reconstruction of objects based on geolocation and image data
CN111921203A (zh) * 2020-08-21 2020-11-13 腾讯科技(深圳)有限公司 虚拟场景中的互动处理方法、装置、电子设备及存储介质
CN112784002A (zh) * 2021-02-07 2021-05-11 腾讯科技(深圳)有限公司 一种虚拟场景生成方法、装置、设备和存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303773A (zh) * 2008-06-10 2008-11-12 中国科学院计算技术研究所 一种虚拟场景生成方法及系统
CN110795819B (zh) * 2019-09-16 2022-05-20 腾讯科技(深圳)有限公司 自动驾驶仿真场景的生成方法和装置、存储介质
CN111815784A (zh) * 2020-06-30 2020-10-23 北京市商汤科技开发有限公司 现实模型的呈现方法及装置、电子设备和存储介质
CN111921195B (zh) * 2020-09-24 2020-12-29 成都完美天智游科技有限公司 三维场景的生成方法和装置、存储介质和电子装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109754458A (zh) * 2018-12-11 2019-05-14 新华三大数据技术有限公司 三维场景的构建方法、装置及计算机可读存储介质
US20200312042A1 (en) * 2019-03-27 2020-10-01 Electronic Arts Inc. Three dimensional reconstruction of objects based on geolocation and image data
CN110473293A (zh) * 2019-07-30 2019-11-19 Oppo广东移动通信有限公司 虚拟对象处理方法及装置、存储介质和电子设备
CN111921203A (zh) * 2020-08-21 2020-11-13 腾讯科技(深圳)有限公司 虚拟场景中的互动处理方法、装置、电子设备及存储介质
CN112784002A (zh) * 2021-02-07 2021-05-11 腾讯科技(深圳)有限公司 一种虚拟场景生成方法、装置、设备和存储介质

Also Published As

Publication number Publication date
CN112784002A (zh) 2021-05-11
CN112784002B (zh) 2023-09-26
US20230071213A1 (en) 2023-03-09

Similar Documents

Publication Publication Date Title
WO2022166681A1 (zh) 一种虚拟场景生成方法、装置、设备和存储介质
CN111008422B (zh) 一种建筑物实景地图制作方法及系统
US20230074265A1 (en) Virtual scenario generation method and apparatus, computer device and storage medium
CN110321443B (zh) 三维实景模型数据库构建方法、装置及数据服务系统
CN113112603B (zh) 三维模型优化的方法和装置
US20210398331A1 (en) Method for coloring a target image, and device and computer program therefor
CN114758337B (zh) 一种语义实例重建方法、装置、设备及介质
CN110478898B (zh) 游戏中虚拟场景的配置方法及装置、存储介质及电子设备
CN111221867A (zh) 一种保护性建筑信息管理系统
CN112528508A (zh) 电磁可视化方法和装置
CN114140588A (zh) 数字沙盘的创建方法、装置、电子设备及存储介质
She et al. 3D building model simplification method considering both model mesh and building structure
CN112053440A (zh) 单体化模型的确定方法及通信装置
Boim et al. A machine-learning approach to urban design interventions in non-planned settlements
Chen et al. Semantic segmentation and data fusion of microsoft bing 3d cities and small uav-based photogrammetric data
CN117710534A (zh) 基于改进教与学优化算法的动画协同制作方法
Shariatpour et al. Urban 3D Modeling as a Precursor of City Information Modeling and Digital Twin for Smart City Era: A Case Study of the Narmak Neighborhood of Tehran City, Iran
CN113379748A (zh) 一种点云全景分割方法和装置
CN115033972B (zh) 一种建筑主体结构批量单体化方法、系统及可读存储介质
CN113989680B (zh) 建筑三维场景自动构建方法及系统
Bi et al. Research on CIM basic platform construction
Wang et al. Integration of 3DGIS and BIM and its application in visual detection of concealed facilities
CN115858843A (zh) 一种街区形态的城市空间图谱信息平台及其构建方法
CN114943407A (zh) 区域规划方法、装置、设备、可读存储介质及程序产品
CN114491779A (zh) 一种智能规划生成草图的方法及相关设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22748960

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.12.2023)