WO2024009126A1 - A method for generating a virtual data set of 3D environments


Info

Publication number
WO2024009126A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
procedural
singular
models
data set
Prior art date
Application number
PCT/IB2022/056245
Other languages
French (fr)
Inventor
Merih OZTAYLAN
Kerem PAR
Ali Ufuk PEKER
Original Assignee
Capoom Inc.
Priority date
Filing date
Publication date
Application filed by Capoom Inc. filed Critical Capoom Inc.
Priority to PCT/IB2022/056245 priority Critical patent/WO2024009126A1/en
Publication of WO2024009126A1 publication Critical patent/WO2024009126A1/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05: Geographic models

Abstract

The present invention relates to a method for generating a virtual data set of 3D environments. The method comprises the steps of: hierarchically categorizing objects into primary classes wherein at least one primary class has at least one secondary sub-class, creating a set of procedural models of said objects with a predetermined number of parameters corresponding to said secondary sub-class, running said set of procedural models through an algorithm that is suitable for randomly choosing a set of parameters and returning a plurality of singular procedural models pertaining to the corresponding procedural model set with the chosen parameters, wherein the singular procedural models and the placement data within a predefined terrain for each of the singular procedural models are taken as inputs to be used within a scene, generating a scene based on the randomly generated singular procedural model inputs where each singular procedural model is placed on the scene according to the corresponding placement data in order to obtain a scene with the aggregated singular procedural models, rendering an output for at least a part of said scene, wherein the camera angle to be rendered is determined based on a predefined secondary sub-class and at least one render for each scene is obtained, and, repeating the scene generation and aggregating the render outputs to obtain a virtual data set.

Description

A METHOD FOR GENERATING A VIRTUAL DATA SET OF 3D ENVIRONMENTS
Technical Field of the Invention
The present invention generally relates to a method for generating a virtual data set of 3D environments.
Background of the Invention
Manual 3D modelling is carried out by humans by means of software packages such as 3D Studio Max, Maya, Cinema 4D, Rhino etc. Although this method allows highly realistic models and environments to be created, it is immensely labor intensive, time consuming and, therefore, costly. These disadvantages of manual 3D modelling have raised a need for automated processes for 3D modelling and, eventually, automated 3D modelling methods have emerged in the related technical field. Said automated methods are mostly based on 3D scanning, e.g., LIDAR and photogrammetry, where a vast number of 3D points are scanned to be converted into corresponding virtual 3D models.
WO 2021/178708 A1 relates to a method for transforming real-world objects into 3D models. Said method comprises the steps of obtaining input imagery of a real-world object at an object modeling system, the input imagery captured using an imaging system from a designated viewing angle; generating a 3D model of the real-world object based on the input imagery using the object modeling system, the 3D model generated based on a plurality of stages corresponding to a sequence of polygons stacked in a direction corresponding to the designated viewing angle; and outputting the 3D model for presentation using a presentation system.
US 2021/082183 A1 discloses a method to reconstruct a 3D virtual model from a 2D aerial image. Two-dimensional aerial images and other geo-spatial information are processed to produce land classification data, vector data and attribute data for buildings found within the images. This data is stored upon a server computer within shape files, and also stored are source code scripts describing how to reconstruct a type of building along with compiled versions of the scripts. A software game or simulator executes upon a client computer in which an avatar moves within a landscape. A classifier classifies a type of building in the shape file to execute the appropriate script. Depending upon its location, a scene composer downloads a shape file, and a compiled script is executed in order to reconstruct any number of buildings in the vicinity of the avatar. The script produces a three-dimensional textured mesh which is then rendered upon a screen of the client computer to display a two-dimensional representation of the building.
CN 110648389 A provides a 3D reconstruction method of urban street view based on the collaboration of unmanned aerial vehicles and edge vehicles. Through the method of collecting street view pictures in dense urban areas by multiple vehicles at the edge and transmitting them to the loop test unit server, a distributed network of multi-node data collection is formed, which realizes multi-perspective shooting and large-scale collection of road and street view picture information.
The above-mentioned attempts to automate the 3D modelling process give rise to certain disadvantages and drawbacks. The models created by these methods have arbitrary topologies and unreasonable file sizes. In addition, transparent and reflective surfaces cannot be adequately captured with these methods and, most importantly, these models do not contain any semantic information. The semantic information is particularly crucial for 3D models since it allows a human operator or a deep learning algorithm to animate or modify the model at a later time. When a scene is manually modelled, this information is not lost, as every object in the scene is modelled separately and the digital copy of the scene contains semantically segmented information for every object that exists in the scene. Hence, manual 3D modelling allows a high level of variation and pinpoint accuracy. Currently, these objectives can only be achieved by means of manual 3D modelling methods and existing automated 3D modelling methods fail to meet the requirements. Thus, it is an object of the present invention to provide an automated method for generating a virtual data set for 3D environments to overcome the aforementioned drawbacks and shortcomings of the state of the art and achieve further advantages.
Brief Description of the Figures
The figures, whose brief explanations are herewith provided, are solely intended for providing a better understanding of the present invention and are as such not intended to define the scope of protection or the context in which said scope is to be interpreted in the absence of the description.
Figure 1 illustrates an exemplary flow diagram for the occurrence generation of buildings according to the present invention.
Figure 2 illustrates an exemplary flow diagram for the occurrence generation of vegetation according to the present invention.
Figure 3 illustrates an exemplary flow diagram for the occurrence generation of path based structures according to the present invention.
Figure 4 illustrates an exemplary flow diagram for the scene rendering process according to the present invention.
Figure 5 illustrates some of the window parameters that may be used to generate procedural models of windows according to the present invention.
Figure 6 illustrates some of the window types that may be used to generate procedural models of windows according to the present invention.
Figure 7 illustrates a plurality of exemplary implementations of windows according to the present invention.
Brief Description of the Invention
The present invention relates to a method for generating a virtual data set of 3D environments. The method comprises the steps of:
- hierarchically categorizing objects into primary classes wherein at least one primary class has at least one secondary sub-class,
- creating a set of procedural models of said objects with a predetermined number of parameters corresponding to said secondary sub-class,
- running said set of procedural models through an algorithm that is suitable for randomly choosing a set of parameters and returning a plurality of singular procedural models pertaining to the corresponding procedural model set with the chosen parameters, wherein the singular procedural models and the placement data within a predefined terrain for each of the singular procedural models are taken as inputs to be used within a scene,
- generating a scene based on the randomly generated singular procedural model inputs where each singular procedural model is placed on the scene according to the corresponding placement data in order to obtain a scene with the aggregated said singular procedural models,
- rendering an output for at least a part of said scene, wherein the camera angle to be rendered is determined based on a predefined secondary sub-class and at least one render for each scene is obtained, and,
- repeating the scene generation and aggregating the render outputs to obtain a virtual data set.
Thus, a method for automatically generating a virtual data set to overcome the aforementioned shortcomings of the existing approaches is obtained. The virtual data set obtained with the method according to the present invention contains all of the semantic information regarding the objects therein, and can have a high level of variation in terms of objects. Furthermore, the information regarding transparent and reflective surfaces is captured and therefore not lost, unlike with the current approaches, since the virtual data set obtained by means of the proposed method includes the semantic information along with the 3D models. Additionally, the file size and the scale of the data set can easily be adjusted depending on the application and available computing power.
According to an embodiment, two primary classes are used for the method, which are single occurrence objects and path based objects.
According to an embodiment of the method, creating a set of procedural models of said objects comprises specifying said parameters by a user to define the context of the scene to be generated and the algorithm returns at least one singular procedural model in accordance with said specific parameters. This is particularly advantageous when a specific instance of a procedural model is required or desired to exist in the data set, or if the data set is desired to resemble a specific environmental setting.
According to an embodiment of the method, the output comprises at least one of parameter data, placement data, 2D segmentation data and 3D segmentation data. This allows the data set to contain as much semantic information as possible and preferably the output comprises all of the mentioned data.
According to an embodiment of the method, the placement data is fed into the algorithm as a plurality of 2D point positions projected on the predefined terrain. Said 2D point positions are preferably used to represent the placement data for single occurrence procedural models, such as buildings.
According to an embodiment of the method, the placement data is fed into the algorithm as at least one vector based spline projected on the predefined terrain. Said vector based splines are preferably used to represent the placement data for path based procedural models, such as roads.
According to an embodiment of the method, the render output is in the form of an image, or point clouds, or a combination of both.
In another aspect, the present invention relates to a virtual data set obtained with the above-mentioned method.
According to an embodiment of the virtual data set, the data set is adapted to be fed into and train a deep learning algorithm as its input in order to generate a plurality of virtual data sets. By means of this, it would be possible to train a deep learning algorithm that is designed to generate a vast number of 3D objects in order to create photorealistic virtual worlds on a large scale by means of neural networks. Since a deep learning model's success greatly depends on the quality and the size of the data set that is used for the training thereof, there are strict requirements that the data set must meet. For example, all information in the images and point clouds should be semantically segmented and each segment should be further labeled with information about its inherent properties. Further, the semantic information should be repeated in the data set with a high level of variation, and 3D point clouds and images should be synchronized. Currently, only manual 3D modelling methods can meet these demands. These objectives are achieved with the present invention in an automated manner.
In another aspect, the present invention relates to the use of said data set for training a neural network.
In a further aspect, the present invention relates to a computer storage medium comprising the said virtual data set, and a computer-readable medium comprising a program containing executable instructions which, when executed by a computer, causes the computer to perform the method according to the present invention.
Detailed Description of the Invention
The present invention mainly relates to a method for generating a virtual data set of 3D environments. The present invention is disclosed in more detail with reference to different embodiments hereinbelow.
Though the objects in the world have infinitely many variations, it is possible to generalize most of them to obtain a limited number of sets when the objects are evaluated through the main classes to which they belong. According to the first step of the method of the present invention, objects are hierarchically categorized into primary classes wherein at least one primary class has at least one secondary sub-class, which will be explained in more detail hereinbelow with illustrative examples.
According to an embodiment, two primary classes are used for the method, which are single occurrence objects and path based objects. With respect to this, said single occurrence objects and said path based objects can be classified into secondary sub-classes. With single occurrence objects, objects that can only exist at a single coordinate point on the world shall be understood, such as buildings (e.g., apartments, houses, religious buildings, high rises, monuments, factories, stadiums) and vegetation, and with path based objects, objects that don't occupy a single place on the world shall be understood (i.e., continuous objects), such as roads and railways. Further, at least one of these primary classes has at least one secondary sub-class. For example, the secondary sub-classes for buildings can include, but are not limited to, windows, doors, roofs, columns, chimneys, gutters, fences, antennas, post boxes, wall lamps, stairs, air conditioning, solar panels, etc., and the secondary sub-classes for vegetation can include, but are not limited to, trees, bushes, grass, hedges, weeds, ivy, flowers etc. Said secondary sub-classes can also have their own sub-categories, such as leaves, branches and trunks for vegetation. Said primary classes and secondary sub-classes for buildings and vegetation are given within exemplary flow diagrams in FIG. 1 and FIG. 2 respectively.
Similarly, the secondary sub-classes for path based objects can include, but are not limited to, roads, railways, cycling infrastructure, walkways, living streets, bridges, tunnels etc. Like vegetation, these secondary sub-classes can have their own sub-categories, such as traffic signs and garbage bins. Such sub-categories are not themselves classified as path based objects: objects such as traffic signs or traffic lights exist at a single coordinate point on the world and are therefore single occurrence objects, but they are defined with respect to the path based object they are connected to. It should be noted that a single primary class with a single secondary sub-class, for example a scene containing only vegetation (single occurrence objects) and only flowers, can also be used for specific purposes; these exemplary classes are not intended to define the scope of protection of the present invention. Said primary classes and secondary sub-classes for path based structures are given within an exemplary flow diagram in FIG. 3.
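Purely as an illustrative aid (this sketch is not part of the patented disclosure), the hierarchical categorization above can be pictured as a small tree structure. The class names are drawn from the examples in the description, while the `ObjectClass` type and its fields are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectClass:
    """A node in the hierarchical object categorization (illustrative only)."""
    name: str
    children: list["ObjectClass"] = field(default_factory=list)

# Primary classes with a few of the secondary sub-classes named in the description.
taxonomy = [
    ObjectClass("single_occurrence", [
        ObjectClass("building", [
            ObjectClass("window"), ObjectClass("door"), ObjectClass("roof"),
        ]),
        ObjectClass("vegetation", [
            ObjectClass("tree", [
                ObjectClass("leaves"), ObjectClass("branches"), ObjectClass("trunk"),
            ]),
            ObjectClass("bush"), ObjectClass("flower"),
        ]),
    ]),
    ObjectClass("path_based", [
        ObjectClass("road"), ObjectClass("railway"), ObjectClass("walkway"),
    ]),
]
```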
Procedural modeling is an umbrella term for a number of techniques in computer graphics to create 3D models and textures from predefined sets of rules, and these techniques are known to persons having ordinary skill in the art. For the second step of the method according to the present invention, a set of procedural models of said objects with a predetermined number of parameters corresponding to said secondary sub-class is created. An example of a window's procedural model, where the window itself is the secondary sub-class with the corresponding parameters that define the model, is given below for the illustration of the second step. The teachings below can be applied to any of the above-mentioned secondary sub-classes, or any secondary sub-class that is suitable for the same purpose.
A procedural window generator tool is created to generate various types and combinations of window types that can be modified with the provided parameters for further alternative results and variations. The system is composed of different networks that are assigned to handle each element separately. The procedurally modelled 3D components that constitute a window are inner and outer frames, glasses, glass frames, sills, and heads. Size, position, arrangement, and details for aesthetic purposes are the other parameters provided for modification of the windows. A two-dimensional rectangle shape with given size parameters creates the outline of the window skeleton. After the size is determined, it branches out into two types: regular windows, and bow and bay windows. This classification changes the topology as well as the size of the structure of the skeleton, depending on which parameter is selected. Regular windows include the most common window types and can have either rounded tops or rectangle shaped tops. Bow and bay type windows are generated on 3D rectangles with given parameters. These parameters change the order of direction of the primitives and sizes to build the wall geometry. Some of the window parameters that may be used to generate procedural models of windows according to the present invention can be seen in FIG. 5.
Once the walls are generated, frame types that are generated in the frame generation network are placed according to given type conditions. Depending on the side they are placed, windows can have different combinations of frame types. There are also parameters for wall sills, window separations, roof profiles and picket types to generate different styles of bow and bay type windows. Windows can have different types of topologies depending on the style, some of which can be seen in FIG. 6. Some additional parameters such as "glass height" or "window separation" can be used to modify the topology of the window skeleton. Once a window frame has been generated thereby, the inside of the closed geometry can be built as glasses. Depending on the selected window frame type, multiple glasses that will form the window can be generated. Each glass set variation, depending on the frame type, has a separate parameter to create multiple configurations and therefore, appearances.
This information is run through the corresponding secondary sub-class network, which is the glass network in this case, which can generate a plurality of different types of inner frames; these types of inner frames have unique parameters to allow the count, positions, thickness, and smoothness of the glass frames to be changed. The connection between the window and the wall it is attached to can have a secondary frame, i.e., an outer frame, for enhancement purposes. This frame is optional and has size and decoration parameters.
Procedurally generated additional elements incorporated into the window network are curtains, shutters, flowerpots, and awnings. The secondary sub-class network, i.e., the window network, is configured to adapt the size and position parameters of these elements according to its structural conditions and rules. Thus, the implementation of additional elements into the window network is rendered possible in an efficient manner in a procedurally generated environment, and some of the exemplary implementations of the windows can be seen in FIG. 7. Moreover, the system provides a single operation to generate a plurality of windows and window types by looping the parameter allocation process for each window module, unlike traditional systems that require repetition of the entire operation for each window. This difference between the methodologies results in a drastic increase in time efficiency depending on the module count. As mentioned above, these teachings regarding the creation of a window procedural model can be adapted and applied to other secondary sub-classes such as doors, roofs and trees in order to obtain a set of procedural models with a realistic level of variance and accurate representations with a minimum number of viable parameters.
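As a rough sketch of how such a generator-with-parameters might be organized (the parameter names, value ranges, and data layout below are invented for illustration and are not the patent's implementation), the per-module parameter allocation loop could look like this:

```python
import random
from dataclasses import dataclass

@dataclass
class WindowParams:
    """One illustrative parameter combination for a procedural window module."""
    style: str            # "regular" or "bow_and_bay", as in the description
    top_shape: str        # regular windows: "rounded" or "rectangular" tops
    width: float          # outline size of the window skeleton (metres, assumed)
    height: float
    glass_rows: int       # glass set configuration within the frame
    glass_cols: int
    has_outer_frame: bool # optional secondary frame toward the wall

def allocate_window_params(rng: random.Random) -> WindowParams:
    """Randomly allocate one parameter set for a single window module."""
    style = rng.choice(["regular", "bow_and_bay"])
    return WindowParams(
        style=style,
        top_shape=rng.choice(["rounded", "rectangular"]) if style == "regular" else "rectangular",
        width=rng.uniform(0.6, 2.5),
        height=rng.uniform(0.8, 2.4),
        glass_rows=rng.randint(1, 4),
        glass_cols=rng.randint(1, 3),
        has_outer_frame=rng.random() < 0.5,
    )

# A single operation generates many window modules by looping the parameter
# allocation, instead of repeating the whole modelling process per window.
rng = random.Random(0)
window_modules = [allocate_window_params(rng) for _ in range(100)]
```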
Furthermore, path based procedural models and a set of these models can be generated on a predefined path that is projected on the base terrain by means of attributes such as the normalized position on the path and the distance from the path. The main input parameter for these models is the path, which can be defined by any conventional vector based spline shape like beziers or b-splines. On any point of the path, parameters like road length, lane count, sidewalk type, separation type, etc. can be defined. Additional roadside items such as lamps can also be defined by placing points along the path that define a perpendicular distance from the road, with the parameters that define the item in question. Changes on point parameters are interpolated for applicable properties like road width.
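A minimal sketch of this path based placement, assuming a polyline as a stand-in for any vector based spline; `point_and_tangent` and `place_roadside_item` are hypothetical helper names, not functions from the patent:

```python
import math

def point_and_tangent(points, t):
    """Point and unit tangent at normalized position t in [0, 1] along a
    polyline (a stand-in here for beziers or b-splines)."""
    segs = list(zip(points, points[1:]))
    lengths = [math.dist(a, b) for a, b in segs]
    target = t * sum(lengths)
    for (a, b), seg_len in zip(segs, lengths):
        if target <= seg_len and seg_len > 0:
            u = target / seg_len
            point = (a[0] + u * (b[0] - a[0]), a[1] + u * (b[1] - a[1]))
            tangent = ((b[0] - a[0]) / seg_len, (b[1] - a[1]) / seg_len)
            return point, tangent
        target -= seg_len
    a, b = segs[-1]  # numerical fallback at t = 1
    return b, ((b[0] - a[0]) / lengths[-1], (b[1] - a[1]) / lengths[-1])

def place_roadside_item(path, t, offset):
    """Place an item such as a lamp at a perpendicular distance from the path,
    using its normalized position along the path and a signed offset from it."""
    (px, py), (tx, ty) = point_and_tangent(path, t)
    nx, ny = -ty, tx  # left-hand normal of the path direction
    return (px + offset * nx, py + offset * ny)

# Example: a lamp 3 m to the left of the road, 40% of the way along the path.
road = [(0.0, 0.0), (50.0, 0.0), (80.0, 30.0)]
lamp_position = place_roadside_item(road, t=0.4, offset=3.0)
```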
The third step of the method according to the present invention is running the above-mentioned set of procedural models through an algorithm that is suitable for randomly choosing a set of parameters and returning a plurality of singular procedural models pertaining to the corresponding procedural model set with the chosen parameters, wherein the singular procedural models and the placement data within a predefined terrain for said singular procedural models are taken as inputs to be used within a scene.
The size of the data set to be generated is preferably determined here, as a user could define the number of desired singular procedural models, or it could be a fixed, predetermined number. It shall be understood that any type of randomization algorithm suitable for randomly generating a set of parameters can be employed within this step, and these types of randomization algorithms are known to persons having an ordinary skill in the art.
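For illustration only, one simple realization of this step (the dictionary-based parameter spaces and the function signature below are assumptions, not the patent's format) is a plain random draw per requested model:

```python
import random

def generate_singular_models(model_sets, count, seed=None):
    """Randomly choose a parameter set per model and return `count` singular
    procedural models. `model_sets` maps a secondary sub-class name to its
    parameter space ({parameter: allowed values}); both the mapping and the
    returned dictionaries are illustrative stand-ins."""
    rng = random.Random(seed)
    singular_models = []
    for _ in range(count):
        sub_class, param_space = rng.choice(list(model_sets.items()))
        params = {name: rng.choice(values) for name, values in param_space.items()}
        singular_models.append({"sub_class": sub_class, "params": params})
    return singular_models

# Example: a user-defined data set size of 1000 models drawn from two sets.
models = generate_singular_models({
    "window": {"style": ["regular", "bow_and_bay"], "glass_rows": [1, 2, 3]},
    "tree":   {"species": ["oak", "pine"], "height_m": [4, 8, 12]},
}, count=1000, seed=42)
```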
According to an embodiment of the method, the placement data can be based on a plurality of 2D point positions projected on the predefined terrain. Alternatively, the placement data can be based on at least one vector based spline projected on the predefined terrain. According to another embodiment, a combination of the two is used for a 3D setting, wherein the placement data for single occurrence objects is based on a plurality of 2D point positions projected on the predefined terrain, and the placement data for path based objects is based on a vector based spline projected on the predefined terrain. After the parameters are selected, one or more singular procedural models are correspondingly generated and returned in a conventional format, such as FBX, Alembic, RS Proxy, etc. The model's format can be specified and selected according to the requirements of the application, and any other conventional 3D model format may be used for the same purpose. After the singular procedural models are generated, the placement data for every singular procedural model is taken along with the singular procedural models as inputs for the next step.
As mentioned above, the input regarding the placement data may be of a different type for different types of objects. For the next step, a scene based on the randomly generated singular procedural model inputs is generated, where each singular procedural model is placed on the scene according to the corresponding placement data in order to obtain a scene with the aggregated singular procedural models. At this point of the method, a full 3D asset for a plurality of singular procedural models placed on a predefined terrain according to their placement data is ready to be rendered.
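A minimal sketch of this aggregation step, assuming the terrain is exposed as a height-field lookup and that a plain list is an acceptable stand-in for whatever scene representation the renderer actually consumes (both assumptions):

```python
from dataclasses import dataclass

@dataclass
class Placement:
    """Placement data: one 2D point for a single occurrence object, or the
    control points of a vector based spline for a path based object."""
    kind: str    # "point" or "spline"
    coords: list # list of (x, y) tuples

def build_scene(singular_models, placements, terrain_height):
    """Aggregate singular procedural models into one scene by projecting their
    2D placement data onto the predefined terrain. `terrain_height(x, y)` is a
    hypothetical height-field lookup."""
    scene = []
    for model, placement in zip(singular_models, placements):
        anchors = [(x, y, terrain_height(x, y)) for (x, y) in placement.coords]
        scene.append({"model": model, "kind": placement.kind, "anchors": anchors})
    return scene

# Example with a flat terrain: one building at a point, one road along a spline.
example_models = [{"sub_class": "building", "params": {}},
                  {"sub_class": "road", "params": {}}]
scene = build_scene(
    example_models,
    [Placement("point", [(12.0, 7.5)]), Placement("spline", [(0.0, 0.0), (50.0, 0.0)])],
    terrain_height=lambda x, y: 0.0,
)
```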
The fifth step of the method is rendering an output for at least a part of said scene, wherein the camera angle to be rendered is determined based on a predefined secondary sub-class and at least one render for each scene is obtained. The focus of the final data set is defined during this step, as the scene renderer will render at least a part of a scene based on the predefined secondary sub-class to determine where the rendering camera will look.
For example, if "windows" is specified as the secondary sub-class here, the rendering cameras will point at and zoom to windows in the scene from whichever position the renderer has placed them. The positions of the rendering cameras can be randomized by means of a randomizer in this regard, as long as they are looking at a predefined secondary sub-class. The scene's lighting setup can also be randomized here if desired, to obtain a higher variance in the results.
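One plausible way to randomize the cameras while keeping them aimed at the predefined secondary sub-class is a random orbit around each target; the scene layout below follows the earlier sketches, and the orbit parameters are invented placeholders rather than the patent's randomizer:

```python
import random

def choose_cameras(scene, target_sub_class, renders_per_scene, seed=None):
    """Randomize camera positions so that every camera looks at an instance of
    the predefined secondary sub-class (e.g. "window")."""
    rng = random.Random(seed)
    targets = [obj for obj in scene
               if obj["model"]["sub_class"] == target_sub_class]
    cameras = []
    for obj in rng.sample(targets, min(renders_per_scene, len(targets))):
        cameras.append({
            "look_at": obj["anchors"][0],        # aim at the target object
            "distance": rng.uniform(3.0, 15.0),  # orbit radius (assumed range)
            "azimuth_deg": rng.uniform(0.0, 360.0),
            "elevation_deg": rng.uniform(5.0, 45.0),
        })
    return cameras
```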
For the last step, the scene generation is repeated, and the render outputs are aggregated to obtain a virtual data set with a plurality of renders.
Thus, a method for automatically generating a virtual data set to overcome the aforementioned shortcomings of the existing 3D modelling methods is obtained, comprising the steps of:
- hierarchically categorizing objects into primary classes wherein at least one primary class has at least one secondary sub-class,
- creating a set of procedural models of said objects with a predetermined number of parameters corresponding to said secondary sub-class,
- running said set of procedural models through an algorithm that is suitable for randomly choosing a set of parameters and returning a plurality of singular procedural models pertaining to the corresponding procedural model set with the chosen parameters, wherein the singular procedural models and the placement data within a predefined terrain for each of the singular procedural models are taken as inputs to be used within a scene,
- generating a scene based on the randomly generated singular procedural model inputs where each singular procedural model is placed on the scene according to the corresponding placement data in order to obtain a scene with the aggregated singular procedural models,
- rendering an output for at least a part of said scene, wherein the camera angle to be rendered is determined based on a predefined secondary sub-class and at least one render for each scene is obtained, and,
- repeating the scene generation and aggregating the render outputs to obtain a virtual data set.
An exemplary flowchart of the method according to the present invention is given in FIG. 4.
According to an embodiment of the method, creating a set of procedural models of said objects comprises specifying said parameters by a user to define the context of the scene to be generated, and the algorithm returns at least one singular procedural model in accordance with said specific parameters. This is particularly advantageous when a specific instance of a procedural model is required or desired to exist in the data set, or if the data set is desired to resemble a specific environmental setting. For example, the data set can be aimed to resemble a rural town. Accordingly, some parameters of procedural models can be further limited and some types of procedural models (e.g., skyscrapers) can be excluded altogether.
According to an embodiment of the method, the output comprises at least one of parameter data, placement data, 2D segmentation data and 3D segmentation data. In this regard, each render is accompanied by the input data that was used to build the scene. This allows the data set to contain as much semantic information as possible, and preferably the output comprises all of the mentioned data. If the data set size is desired to be low, not all of the mentioned data needs to be used for the output. According to another embodiment of the method, the render output is in the form of an image, or point clouds, or a combination of both. These two embodiments can be combined into a single embodiment for an output in the form of, for example, point clouds comprising 3D segmentation data.
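To illustrate how a render and its semantic inputs might travel together in the aggregated output (all field names below are assumptions, not a format defined by the patent), one record per render could look like:

```python
def render_record(render_id, image_path, parameter_data, placement_data,
                  seg2d_path=None, seg3d_path=None):
    """Bundle one render output with the input data that built its scene, so
    the semantic information stays attached. Any of the optional fields can be
    dropped when a smaller data set is desired."""
    return {
        "render_id": render_id,
        "render": image_path,              # an image, a point cloud, or both
        "parameter_data": parameter_data,  # the randomly chosen parameters
        "placement_data": placement_data,  # 2D points / splines on the terrain
        "segmentation_2d": seg2d_path,     # optional per-pixel labels
        "segmentation_3d": seg3d_path,     # optional per-point labels
    }
```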
In another aspect, the present invention relates to a virtual data set obtained with the method or any of its embodiments. Depending on the scale of the application and how many scenes are generated to build the data set, the data set could represent a house, a street, a town, a city, etc. In a further aspect, the present invention relates to a computer storage medium comprising the said virtual data set, and a computer-readable medium comprising a program containing executable instructions which, when executed by a computer, causes the computer to perform the above-mentioned method to generate a virtual data set. According to an embodiment of the data set, the data set can be adapted to be fed into and train a deep learning algorithm in order to generate a plurality of virtual data sets, where the deep learning algorithm is intended to produce 3D models. In another aspect, the present invention relates to the use of the data set according to the present invention for training a neural network.
Further advantages, the possible embodiments and uses of the invention shall be apparent from the explanations above and exemplary figures appended to the present description. It is to be understood that the embodiments and variations shown and described herein are merely illustrative of the principles of this invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention.

Claims

1. A method for generating a virtual data set of 3D environments, comprising the steps of:
- hierarchically categorizing objects into primary classes wherein at least one primary class has at least one secondary sub-class,
- creating a set of procedural models of said objects with a predetermined number of parameters corresponding to said secondary sub-class,
- running said set of procedural models through an algorithm that is suitable for randomly choosing a set of parameters and returning a plurality of singular procedural models pertaining to the corresponding procedural model set with the chosen parameters, wherein the singular procedural models and the placement data within a predefined terrain for each of the singular procedural models are taken as inputs to be used within a scene,
- generating a scene based on the randomly generated singular procedural model inputs where each singular procedural model is placed on the scene according to the corresponding placement data in order to obtain a scene with the aggregated singular procedural models,
- rendering an output for at least a part of said scene, wherein a camera angle to be rendered is determined based on a predefined secondary sub-class and at least one render for each scene is obtained, and,
- repeating the scene generation and aggregating the render outputs to obtain a virtual data set.
2. The method of claim 1, wherein said primary classes are for single occurrence objects and path based objects.
3. The method of claim 1, wherein creating a set of procedural models of said objects comprises specifying said parameters by a user to define the context of the scene to be generated and the algorithm returns at least one singular procedural model in accordance with said specific parameters.
4. The method of claim 1, wherein the output comprises at least one of parameter data, placement data, 2D segmentation data and 3D segmentation data.
5. The method of claim 1, wherein the placement data is fed into the algorithm as a plurality of 2D point positions projected on the predefined terrain.
6. The method of claim 1, wherein the placement data is fed into the algorithm as at least one vector based spline projected on the predefined terrain.
7. The method of claim 1, wherein the render output is in the form of an image, or point clouds, or a combination of both.
8. A virtual data set obtained with the method according to any of claims 1-7.
9. The virtual data set according to claim 8, wherein the data set is adapted to be fed into and train a deep learning model as its input in order to generate a plurality of virtual data sets.
10. The use of the data set according to claim 9 for training a neural network.
11. A computer-readable storage medium comprising the virtual data set according to claim 8.
12. A computer-readable medium comprising a program containing executable instructions which, when executed by a computer, causes the computer to perform the method according to claim 1.
PCT/IB2022/056245 2022-07-06 2022-07-06 A method for generating a virtual data set of 3d environments WO2024009126A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2022/056245 WO2024009126A1 (en) 2022-07-06 2022-07-06 A method for generating a virtual data set of 3d environments


Publications (1)

Publication Number Publication Date
WO2024009126A1 (en)

Family

Family ID: 82846235

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2022/056245 WO2024009126A1 (en) 2022-07-06 2022-07-06 A method for generating a virtual data set of 3d environments

Country Status (1)

Country Link
WO (1) WO2024009126A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180053345A1 (en) * 2016-08-18 2018-02-22 Robert Bosch Gmbh System and method for procedurally generated object distribution in regions of a three-dimensional virtual environment
US20190156151A1 (en) * 2017-09-07 2019-05-23 7D Labs, Inc. Method for image analysis
CN110648389A (en) 2019-08-22 2020-01-03 广东工业大学 3D reconstruction method and system for city street view based on cooperation of unmanned aerial vehicle and edge vehicle
US20210082183A1 (en) 2019-09-13 2021-03-18 Bongfis GmbH Reality-based three-dimensional infrastructure reconstruction
WO2021178708A1 (en) 2020-03-04 2021-09-10 Geopipe, Inc. Systems and methods for inferring object from aerial imagery


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MEIDA CHEN ET AL: "Generating synthetic photogrammetric data for training deep learning based 3D point cloud segmentation models", arXiv, 21 August 2020 (2020-08-21), XP081746401 *
MEIDA CHEN ET AL: "STPLS3D: A Large-Scale Synthetic and Real Aerial Photogrammetry 3D Point Cloud Dataset", arXiv, 17 March 2022 (2022-03-17), XP091191862 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22751817

Country of ref document: EP

Kind code of ref document: A1