CN116977787A - Training scene generation method and device, electronic equipment and readable storage medium - Google Patents
Training scene generation method and device, electronic equipment and readable storage medium
- Publication number
- CN116977787A (application number CN202310921901.3A)
- Authority
- CN
- China
- Prior art keywords
- model
- object model
- scene
- information
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/587—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/13—Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/23—Design optimisation, verification or simulation using finite element methods [FEM] or finite difference methods [FDM]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
- G06V20/36—Indoor scenes
Abstract
The application provides a training scene generation method and apparatus, an electronic device, and a readable storage medium, relating to the technical field of three-dimensional modeling. The training scene generation method comprises the following steps: obtaining an object model library; determining a target object model in the object model library through a domain randomization algorithm according to scene information of a scene model, wherein the target object model matches the scene information; and configuring the target object model into the scene model to generate a training scene file.
Description
Technical Field
The application relates to the technical field of three-dimensional modeling, and in particular to a training scene generation method and apparatus, an electronic device, and a readable storage medium.
Background
With the continuous advancement of robot technology, the application of robots has gradually expanded from the industrial field to non-deterministic tasks in complex civilian household scenes. Because home scenes are highly unstructured, large-scale training and testing in simulation scenes has become the mainstream technical means for improving the generalization capability of service robots performing autonomous operations in such highly unstructured environments.
In the related art, most simulation scenes are built with standard three-dimensional modeling software and exported in the file formats required by the various simulation platforms. However, this ignores the requirements of tasks related to service robot operation, such as semantic segmentation and physical interaction with objects; the conversion is only performed for a fixed modeling environment, and there is no general domain randomization capability.
Disclosure of Invention
The present application aims to solve one of the technical problems existing in the prior art or related technologies.
To this end, a first aspect of the present application proposes a training scenario generation method.
A second aspect of the present application proposes a training scenario generation apparatus.
A third aspect of the present application proposes a training scenario generation apparatus.
A fourth aspect of the application proposes a readable storage medium.
A fifth aspect of the application proposes a computer program product.
A sixth aspect of the application proposes an electronic device.
In view of this, according to a first aspect of the present application, there is provided a training scenario generation method, including: obtaining an object model library; determining a target object model in an object model library through a random algorithm according to scene information of the scene model, wherein the target object model is matched with the scene information; and configuring the target object model into the scene model to generate a training scene file.
According to a second aspect of the present application, there is provided a training scenario generation apparatus, including: the acquisition module is used for acquiring an object model library; the determining module is used for determining a target object model in the object model library according to the scene information of the scene model through a domain random algorithm, wherein the target object model is matched with the scene information; and the configuration module is used for configuring the target object model into the scene model to generate a training scene file.
According to a third aspect of the present application, there is provided a training scenario generation apparatus, including: a memory in which a program or instructions are stored; and a processor that executes the program or instructions stored in the memory to implement the steps of the training scenario generation method according to any one of the first aspects. The apparatus therefore has all the beneficial technical effects of the training scenario generation method according to any one of the first aspects, which will not be described in detail herein.
According to a fourth aspect of the present application, there is provided a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the training scenario generation method as in any of the above-mentioned first aspects. Therefore, the training scenario generation method in any one of the above first aspects has all the beneficial technical effects, and will not be described in detail herein.
According to a fifth aspect of the present application, a computer program product is proposed which, when executed by a processor, implements the steps of the training scenario generation method according to any one of the above first aspects. It therefore has all the beneficial technical effects of that method, which will not be described in detail herein.
According to a sixth aspect of the present application there is provided an electronic device comprising: the training scenario generating apparatus as defined in the above second or third aspect, and/or the readable storage medium as defined in the above fourth aspect, and/or the computer program product as defined in the above fifth aspect, thus has all the technical advantages of the training scenario generating apparatus as defined in the above second or third aspect, and/or the readable storage medium as defined in the above fourth aspect, and/or the computer program product as defined in the above fifth aspect, which are not described in detail herein.
According to the technical solution of the present application, when the scene model is acquired, the scene information corresponding to the scene model is acquired, and a target object model matching the scene information is randomly selected from the object model library by a randomization algorithm. By combining the randomly extracted target object model with the scene model, a training scene file with strong generalization and randomness can be obtained.
Additional aspects and advantages of the application will be set forth in part in the description which follows, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates one of the schematic flow diagrams of a training scenario generation method provided in some embodiments of the application;
FIG. 2 illustrates a second schematic flow diagram of a training scenario generation method provided in some embodiments of the application;
FIG. 3 illustrates a third schematic flow diagram of a training scenario generation method provided in some embodiments of the present application;
FIG. 4 illustrates a fourth schematic flow diagram of a training scenario generation method provided in some embodiments of the application;
FIG. 5 illustrates a fifth schematic flow diagram of a training scenario generation method provided in some embodiments of the application;
FIG. 6 illustrates a schematic diagram of model metadata provided by some embodiments of the application;
FIG. 7 illustrates a schematic diagram of scene metadata provided by some embodiments of the application;
FIG. 8 illustrates a sixth schematic flow diagram of a training scenario generation method provided in some embodiments of the application;
FIG. 9 illustrates one of the block diagrams of training scenario generation apparatus provided by some embodiments of the present application;
FIG. 10 illustrates a second block diagram of a training scenario generation apparatus provided by some embodiments of the present application;
fig. 11 illustrates a block diagram of an electronic device provided by some embodiments of the application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the present embodiment and the features in the embodiment may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced in other ways than those described herein, and therefore the scope of the present application is not limited to the specific embodiments disclosed below.
Training scenario generation methods, apparatuses, readable storage media, computer program products, and electronic devices according to some embodiments of the present application are described below with reference to fig. 1 through 11.
According to an embodiment of the present application, as shown in fig. 1, a training scenario generating method is provided, including:
step 102, obtaining an object model library;
step 104, determining a target object model in the object model library through a domain random algorithm according to the scene information of the scene model, wherein the target object model is matched with the scene information;
and 106, configuring the target object model into the scene model to generate a training scene file.
The embodiment of the present application provides a training scene generation method for building simulation training scenes for robots. By acquiring the object model library and analyzing the scene information of the scene model, a target object model matching the scene information can be determined, where the target object model is found in the object model library by a randomization algorithm. The target object model is then configured into the scene model to generate a training scene file. Searching the object model library with a randomization algorithm and combining the found target object model with the scene model improves the diversity of the generated training scene files. The object model library stores, in advance, the three-dimensional models of objects together with their metadata; once the library is built, three-dimensional object models can be queried and used based on the object metadata it contains.
After the scene model is generated, reasonable physical parameters are added to the target object model in the scene model according to the object parameter information pre-stored in the database for the corresponding model instance.
Specifically, the robot may be a household sweeping robot, a shopping-mall guide robot, a restaurant meal-delivery robot, or the like, and further includes various service robots, such as home service robots.
In this embodiment, object models from different sources are stored in the object model library, so that a variety of object models from different sources can be selected when constructing the training scene file.
The object models in the object model library can be obtained by forward modeling, reverse-engineering scanning, or three-dimensional reconstruction.
In this embodiment, the scene model is the model required for building the training scene file and serves as the carrier of the other object models. The scene information includes information pre-stored in the scene model, including but not limited to scene category information, scene size information, and scene function information. From the scene information, the relevant information of the object models to be placed in the scene model can be determined, so that the object models in the object model library that can be placed in the scene model are identified, and the target object model is extracted by a randomization algorithm from the object models found based on the scene information.
The scene model can be downloaded directly from a server, or it can be constructed by modeling.
In the embodiment of the present application, when a scene model is acquired, the scene information corresponding to the scene model is acquired, and a target object model matching the scene information is randomly selected from the object model library by a randomization algorithm. By combining the randomly extracted target object model with the scene model, a training scene file with strong generalization and randomness can be obtained.
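As a minimal sketch of steps 102 to 106 above, the following Python fragment selects matching object models at random and places them into a scene; the data layout (dictionaries with "scene_info", "rooms" and "size" fields) and the sizing rule are illustrative assumptions, not the disclosed implementation.

```python
import random

def generate_training_scene(scene_model, object_library, seed=None):
    """Sketch of steps 102-106: randomly select object models that match the
    scene information and configure them into the scene model."""
    rng = random.Random(seed)
    scene_info = scene_model["scene_info"]        # e.g. {"rooms": ["kitchen"], "size": 12.0}
    # Step 104: keep only object models whose allowed rooms intersect the scene's rooms.
    candidates = [m for m in object_library
                  if set(m["rooms"]) & set(scene_info["rooms"])]
    # The number of placed models grows with the scene size (an assumed rule).
    count = min(len(candidates), max(1, int(scene_info["size"] // 4)))
    target_models = rng.sample(candidates, count)
    # Step 106: configure the selected models into the scene and return a scene file.
    scene_model.setdefault("objects", []).extend(target_models)
    return {"scene": scene_model, "objects": target_models}
```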
As shown in fig. 2, in some embodiments, optionally, the target object model includes a first object model and a second object model, and the first object model can carry the second object model. Determining the target object model in the object model library through a domain randomization algorithm according to the scene information of the scene model includes:
step 202, extracting a first model list from an object model library based on scene information;
step 204, extracting an individual instance of the first object model in the first model list through a domain random algorithm;
step 206, obtaining object category information of the first object model;
step 208, extracting a second model list from the object model library based on the object category information;
Step 210, extracting an individual instance of the second object model in the second model list by a domain randomization algorithm.
In the embodiment of the present application, both the first object model, which is placed directly into the scene model, and the second object model, which is placed onto the first object model, can be found based on the scene information through a domain randomization algorithm. A first object model matching the scene information is randomly extracted, and a second object model matching the object category information of the first object model is randomly extracted, so that a large number of the object models in the scene model are obtained through the domain randomization algorithm, further improving the generalization and randomness of the finally generated training scene file.
For example, the first object model may be a "table" model, the individual instance of the first object model is a "xx model table", the second object model may be a "cup" model, and the individual instance of the second object model is a "xx size wine glass".
In this embodiment, since the scene model is the carrier of the object models in the training scene file, the object models that can be directly configured into the scene model can be determined from the scene information. These object models are stored in the form of a first model list, which facilitates subsequent extraction and searching.
In this embodiment, the models in the first model list are all object models that can be directly configured into the scene model. The first object model is randomly extracted from the first model list by a domain randomization algorithm, and the number of first object models is at least one.
Illustratively, the scene model includes a house model, and the scene information includes the room information and size information of the house model, for example a "kitchen" area. The object models that can be placed in the kitchen area are found in the object model library through the scene information, and a first model list is generated from them; the models in the first model list include a kitchen model, a microwave oven model, and the like. The first object models are extracted from the first model list through a domain randomization algorithm, and the number of first object models is positively correlated with the size information in the scene information.
In this embodiment, after a first object model is randomly extracted from the first model list, its object category information is acquired; this information represents the specific classification of the first object model. The second model list can then be extracted from the object model library through the object category information of the first object model. The models in the second model list are all object models that can be placed onto the first object model; the second object model is randomly extracted from the second model list through a domain randomization algorithm, and the number of second object models is at least one.
In this embodiment, after the second model list is extracted according to the object category information, a correspondence is established between the second model list and the first object model, which makes it convenient to find the corresponding second object model in the second model list through the domain randomization algorithm.
Illustratively, if the first object model is a "dining table" model, the models in the second model list may include a "dinner plate" model, a "tableware" model, and the like.
Illustratively, the first object model is a "refrigerator" model, and the models in the second model list may include a "pop can" model, a "fruit" model, and so on.
In the embodiment of the present application, a first model list of models that can be placed directly into the scene model is extracted from the object model library according to the scene information of the scene model, and a first object model is randomly extracted from the first model list. A second model list is then extracted from the object model library according to the object category information of the first object model, and a second object model is randomly extracted from the second model list. Object models are thus extracted from the object model library in a multi-level manner, which ensures that the extracted first object model placed in the scene model and the second object model placed on the first object model conform to common sense, so that the rationality of the simulation scene corresponding to the training scene file is guaranteed while the generalization and randomness of the generated training scene file are improved.
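A compact sketch of this two-level selection is shown below; the library entry fields ("level", "room", "carriers", "category") are assumed for illustration only.

```python
import random

def select_hierarchical(object_library, scene_info, rng=random):
    """Two-level domain-randomized selection (steps 202-210)."""
    # Steps 202-204: models that can be placed directly in the scene; pick one at random.
    first_list = [m for m in object_library
                  if m["level"] == "carrier" and m["room"] in scene_info["rooms"]]
    first = rng.choice(first_list)
    # Steps 206-208: models that this carrier category can hold.
    second_list = [m for m in object_library
                   if m["level"] == "item" and first["category"] in m["carriers"]]
    # Step 210: randomly pick the items to put on the carrier.
    second = rng.sample(second_list, min(2, len(second_list)))
    return first, second

# Hypothetical library entries.
library = [
    {"name": "dining_table_a", "level": "carrier", "room": "dining room", "category": "dining table"},
    {"name": "plate_b", "level": "item", "carriers": ["dining table"], "category": "dinner plate"},
    {"name": "cup_c", "level": "item", "carriers": ["dining table", "desk"], "category": "cup"},
]
carrier, items = select_hierarchical(library, {"rooms": ["dining room"]})
```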
As shown in fig. 3, in some embodiments, optionally, the target object model includes a first object model, and configuring the target object model into the scene model to generate the training scene file includes:
step 302, obtaining an object placement area of a scene model;
step 304, extracting a target area in the object placement area through a rule judgment algorithm according to the object category information;
step 306, configuring the first object model to the target area.
In the embodiment of the present application, a target area for placing the first object model within the placement area of the scene model is determined through a rule judgment algorithm, and the first object model is configured into the target area. Because the target area is randomly selected from the object placement area of the scene model, the generalization and randomness of the generated training scene file are further improved.
In this embodiment, the placement area is the area of the scene model in which the first object model can be placed, and the target area is at least a part of the placement area. The target area is obtained by random screening within the placement area based on the object category information of the first object model, so the target area is a suitable area for placing the first object model.
Illustratively, the scene model is a house scene model, and the placement area includes a bedroom area, a kitchen area, and the like, and the target area includes a partial area in the bedroom area, a partial area in the kitchen area, and the like.
In the embodiment of the application, after the object placement area in the scene model is determined, the target area in the placement area is randomly extracted based on the category information of the first object model, so that the first object model can be randomly placed in the scene model in a reasonable range, the random capability of the generated training scene file is ensured, and the rationality of the simulation scene is ensured.
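The rule judgment step can be pictured as a lookup from object category to permitted areas, as in the sketch below; the category-to-room table and the field names are assumptions made for illustration.

```python
import random

# Hypothetical category-to-room rules used by the rule judgment step.
PLACEMENT_RULES = {
    "dining table": {"dining room", "kitchen"},
    "bed": {"bedroom"},
    "refrigerator": {"kitchen"},
}

def pick_target_area(placement_areas, category, rng=random):
    """Steps 302-306: keep the areas allowed for this category and pick one at random."""
    allowed = [a for a in placement_areas if a["room"] in PLACEMENT_RULES.get(category, set())]
    return rng.choice(allowed) if allowed else None
```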
As shown in fig. 4, in some embodiments, optionally, the target object model includes a second object model, and after configuring the first object model to the target region, further includes:
step 402, obtaining information of a containing area of a first object model;
step 404, determining the configuration position and the configuration gesture of the second object model in the first object model through a random algorithm according to the accommodation area information;
step 406, configuring the second object model to the first object model according to the configuration gesture and the configuration position.
In the embodiment of the present application, when the target object model includes a second object model and the first object model has already been configured into the scene model, the configuration position and the configuration posture of the second object model on the first object model can be randomly selected according to the accommodation area information of the first object model. The second object model is then placed on the first object model according to the configuration posture and the configuration position.
In this embodiment, the accommodation area information of the first object model includes the outer contour coordinates of the first object model. Based on these coordinates, the configuration posture and configuration position of the second object model in the accommodation area can be determined; the second object model is rotated according to the configuration posture and translated to the configuration position.
Illustratively, the configuration pose includes, but is not limited to, an orientation of the second object model.
In the embodiment of the present application, when the second object model is configured onto the first object model, the accommodation area of the first object model is relatively small and the types of second object models that can be placed in it are relatively fixed. The configuration posture and configuration position of the second object model can therefore be randomly selected directly from the accommodation area information, which simplifies the process of configuring the second object model onto the first object model and improves randomness.
In some embodiments, optionally, the number of second object models is at least one. After the second object model is configured onto the first object model according to the configuration posture and the configuration position, the method further includes: when the number of second object models is at least two and an overlapping area exists between at least two of them, returning to the step of determining, by a random algorithm according to the accommodation area information, the configuration position and configuration posture of the second object model on the first object model.
In the embodiment of the present application, when multiple second object models are configured onto the same first object model, overlap detection is used to determine whether two second object models interfere with each other. If they overlap, their placement on the first object model is judged unreasonable, and the step of randomly selecting configuration positions and configuration postures for the second object models is executed again, which reduces unreasonable layouts of the second object models on the first object model in the built simulation scene.
Illustratively, the first object model is a "dining table" model, and the second object models may be a "dinner plate" model, a "bowl" model, a "chopsticks" model, a "spoon" model, and the like.
In the embodiment of the present application, after the second object models are configured onto the first object model, whether their placement positions are reasonable is judged by checking whether any of them overlap, which avoids interference among the second object models and errors in the built simulation scene.
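A minimal sketch of this rejection-sampling placement is given below; it treats the accommodation area as an axis-aligned rectangle and ignores item rotation in the overlap test, which are simplifying assumptions rather than the disclosed procedure.

```python
import random

def overlaps(a, b):
    # Axis-aligned rectangle overlap test on (x0, y0, x1, y1) boxes.
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def place_items_on_carrier(carrier_area, item_sizes, max_tries=50, rng=random):
    """Randomly place item footprints (w, d) inside the carrier's accommodation
    rectangle, re-sampling whenever two footprints overlap (steps 402-406)."""
    x0, y0, x1, y1 = carrier_area                 # outer contour of the usable surface
    placed = []                                   # (x, y, yaw, w, d) tuples
    for w, d in item_sizes:
        for _ in range(max_tries):
            x = rng.uniform(x0, x1 - w)
            y = rng.uniform(y0, y1 - d)
            yaw = rng.uniform(0.0, 360.0)         # configuration posture: orientation only
            box = (x, y, x + w, y + d)
            if all(not overlaps(box, (px, py, px + pw, py + pd))
                   for px, py, _, pw, pd in placed):
                placed.append((x, y, yaw, w, d))
                break
    return placed
```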
As shown in fig. 5, in some embodiments, optionally, obtaining the object model library includes:
step 502, obtaining an object model and first model information of the object model;
step 504, generating model metadata corresponding to the object model according to the first model information;
step 506, constructing an object model library based on the model metadata.
The embodiment of the present application provides a process for constructing the object model library. After an object model is acquired, model metadata corresponding to the object model is generated based on its first model information. The object model library is then constructed from the model metadata; in other words, the model data in the object model library are stored in the form of model metadata, which makes it convenient to extract object models. Constructing the object model library in this way allows the scene generation algorithm to query and use the metadata of randomly selected model instances while it is running.
In the embodiment of the application, the model metadata not only comprises the object model, but also comprises the first model information corresponding to the object model, so that the object models in the constructed object model library are ensured to correspond to the corresponding first model information, and the corresponding object models are conveniently searched and extracted in the constructed object model library.
In the embodiment of the present application, the first model information is model information of an object model, which includes, but is not limited to, information of an object type, an object name, etc. of the object model.
In the embodiment of the application, after the object model is acquired, the object model and the corresponding first model information thereof are stored in the object model database in the form of the model metadata, so that the object model and the corresponding metadata thereof can be conveniently extracted from the object model database in the follow-up.
In some embodiments, optionally, generating model metadata corresponding to the object model according to the first model information includes: generating a model label corresponding to the object model according to the first model information; configuring the model labels into an object model to generate model metadata; wherein the model tag comprises at least one of: name tags, category tags, sub-category tags, volume tags, anchor tags, bias tags, size tags, physics tags.
In the embodiment of the application, the model metadata not only comprises an object model, but also comprises a model label determined according to the first model information, wherein the model label comprises at least one of a name label, a category label, a volume label and a physical parameter label. Through the model labels, the corresponding object model can be quickly found in the object model library.
Illustratively, the model tags corresponding to the object model include: name label "chinese dining table abc", category label "table", subtype label "dining table".
It should be noted that the volume label may be stored as the object bounding box information corresponding to the object model, and the maximum cuboid volume occupied by the object model can be calculated from this bounding box information. The physical parameter labels contain the physical attributes of the object corresponding to the object model, which ensures realistic physical interaction of the robot in the simulation scene when the object model is configured into it.
As shown in fig. 6, the model metadata includes the name tag, category tag, sub-category tag, volume tag (bounding box tag), anchor tag, bias tag, size tag, and physical tags of the object model. The physical tags include: mass, inertia, maximum penetration speed, maximum impulse force, coefficient of restitution, coefficient of static friction, and the average coefficient of dynamic friction, together with a randomizable range for each tag.
In the embodiment of the application, the corresponding model labels are arranged in the object model, so that the generated model metadata comprises various model labels, and the matched object model can be quickly searched in the object model library through the model labels.
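The metadata for a single model instance could be laid out as below; the field names and values are hypothetical and only mirror the tags listed above.

```python
# Hypothetical metadata entry for one object model instance.
model_metadata = {
    "name": "chinese dining table abc",
    "category": "table",
    "subcategory": "dining table",
    "bounding_box": {"min": [-0.8, -0.45, 0.0], "max": [0.8, 0.45, 0.75]},   # volume tag
    "anchor": [0.0, 0.0, 0.0],
    "bias": [0.0, 0.0, 0.0],
    "size": {"length": 1.6, "width": 0.9, "height": 0.75},
    "physics": {
        "mass": 25.0,
        "max_penetration_speed": 0.1,
        "max_impulse": 50.0,
        "restitution": 0.2,
        "static_friction": 0.6,
        "dynamic_friction": 0.5,
        "random_range": {"mass": [20.0, 30.0], "static_friction": [0.4, 0.8]},
    },
}
```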
In some embodiments, optionally, after the object model is obtained, the method further includes: calculating an object bounding box corresponding to the object model; and adjusting the object model based on the object bounding box so that the model parameters of the object model match preset parameters. Because the object models in the object model library come from different sources, their model parameters differ; unifying the model parameters of object models from different sources makes it possible to configure the object models into the scene model automatically, reducing the manual operations that would otherwise be required.
Illustratively, the object bounding box is expressed as follows:
X_aabb = {x_min, y_min, z_min, x_max, y_max, z_max};
where X_aabb is the coordinate information of the object bounding box, x_min, y_min and z_min are the minimum coordinate values of the object bounding box on the x-, y- and z-axes of its own coordinate system, and x_max, y_max and z_max are the corresponding maximum coordinate values.
In this embodiment, each model vertex of the object model is traversed to obtain its vertex coordinates. From these vertex coordinates, the maximum and minimum coordinate values along the x-, y- and z-axes of the object model's modeling coordinate system can be determined, and the object bounding box is constructed from these values.
In the embodiment of the application, the object bounding box corresponding to the object model can be automatically constructed by acquiring the vertex coordinates of the model vertices of the object model, so that the process of configuring the object model to the scene model is further simplified.
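A straightforward way to compute such an axis-aligned bounding box from the vertex list, shown here only as an illustrative sketch:

```python
def compute_aabb(vertices):
    """Traverse the model vertices and return (x_min, y_min, z_min, x_max, y_max, z_max)."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs), max(xs), max(ys), max(zs))

# Example: a unit cube with one corner at the origin.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(compute_aabb(cube))   # (0, 0, 0, 1, 1, 1)
```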
In the embodiment of the application, the unification of the model parameters is carried out on the object models with different sources to be matched with the preset parameters, so that the model parameters in the object model library are relatively unified, the extracted object models are conveniently configured to the scene model subsequently, and the process of automatically configuring the object models to the scene model can be realized.
In some embodiments, optionally, the model parameters include a model construction origin, the preset parameters include a target position of the construction origin, and adjusting the object model based on the object bounding box includes: and moving the construction origin of the object model to the target position based on the coordinate information of the object bounding box.
In the embodiment of the application, the construction origins of the object models from different sources are different, and unified processing is required for the different construction origins in order to facilitate the subsequent automatic placement of the object models into the scene model.
In this embodiment, the coordinate information of the object bounding box is a three-dimensional coordinate under its own coordinate system, which is the reference coordinate system of the origin of the construction of the object model. The construction origins of different object models can be processed to the target position in a unified way through the coordinate information of the object bounding box and the coordinate information of the target position.
Illustratively, the target position of the construction origin of the object model is the geometric center of the object bounding box, so each vertex v_i = [x_i y_i z_i]^T in the object model is typically processed according to the following relation:
v_i ← v_i - [(x_max + x_min)/2, (y_max + y_min)/2, (z_max + z_min)/2]^T;
where v_i is a vertex of the object model, x_i, y_i and z_i are the coordinates of the vertex on the x-, y- and z-axes, x_max and x_min are the maximum and minimum coordinate values on the x-axis, y_max and y_min are the maximum and minimum coordinate values on the y-axis, and z_max and z_min are the maximum and minimum coordinate values on the z-axis.
In the embodiment of the application, the coordinate information of the object bounding box of the object model and the target position of the construction origin are obtained, so that the construction origin of the object model is uniformly moved to the target position, each object model in the object model library can be under the same construction origin, and the subsequent process of configuring the object model to the scene model is facilitated.
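The same centering step in code, reusing compute_aabb from the sketch above (an illustrative sketch, not the patent's implementation):

```python
def move_origin_to_bbox_center(vertices):
    """Re-express every vertex relative to the geometric center of the bounding box,
    so the construction origin of the model ends up at that center."""
    x_min, y_min, z_min, x_max, y_max, z_max = compute_aabb(vertices)
    cx, cy, cz = (x_min + x_max) / 2, (y_min + y_max) / 2, (z_min + z_max) / 2
    return [(x - cx, y - cy, z - cz) for x, y, z in vertices]
```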
In some embodiments, optionally, the model parameters include a scaling factor, the preset parameters include a preset scale, and adjusting the object model based on the object bounding box includes: adjusting the scaling of the object model to the preset scale based on the coordinate information of the object bounding box and a preset size parameter.
In the embodiment of the application, the scaling ratios of the object models from different sources are different, and in order to facilitate the subsequent automatic placement of the object models into the scene model, the different scaling ratios need to be subjected to unified processing to be preset ratios.
In this embodiment, the preset size parameter may be a size parameter in one or more dimensions.
Illustratively, the preset dimensional parameters include a preset height, a preset width, a preset length, and the like.
In this embodiment, the scaling of each object model is set by the coordinate information of the object bounding box corresponding to the plurality of object models and the preset size parameter.
Illustratively, if the unit of the preset size parameter is centimetres and the average height of the object models of a given category is 15 cm, the scaling of an object model of that category is adjusted by the following relation:
s_z = 15 / (z_max - z_min);
where s_z is the adjusted scale, 15 is the average height of that category of object model, z_max is the maximum coordinate value of the object model on the z-axis, and z_min is the minimum coordinate value of the object model on the z-axis.
In the embodiment of the application, the scaling of the object model is adjusted to be the preset scale according to the coordinate information of the object bounding box of the object model and the preset size parameter, so that the scaling of the object model is unified, and the subsequent process of configuring the object model to the scene model is facilitated.
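The height normalization can be sketched as below, again reusing compute_aabb; the 15 cm target is the example value from the text, and other categories would use their own preset heights.

```python
def normalize_height(vertices, target_height=15.0):
    """Scale the model so that its bounding-box height equals the preset category height."""
    _, _, z_min, _, _, z_max = compute_aabb(vertices)
    s_z = target_height / (z_max - z_min)                 # s_z = 15 / (z_max - z_min)
    return [(x * s_z, y * s_z, z * s_z) for x, y, z in vertices], s_z
```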
In some embodiments, considering the effect of lighting conditions on computer-vision-based perception algorithms, optionally, before the target object model is configured into the scene model to generate the training scene file, the method further includes: configuring light source parameters for the scene model through a random algorithm, wherein the light source parameters include at least one of: light source position, light source size, light source brightness, and light source color.
In an embodiment of the present application, after the target object model is configured into the scene model, the method further includes configuring the scene model with light source parameters, which include any one of light source position, light source size, light source brightness, and light source color. Illumination and shadows are rendered in the robot training scene through a real-time ray-tracing algorithm, which improves the realism of the training simulation scene.
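Light source randomization could look like the sketch below; the parameter ranges and units are assumptions chosen only for illustration.

```python
import random

def randomize_light(rng=random):
    """Draw one random light-source configuration (position, size, brightness, color)."""
    return {
        "position": [rng.uniform(-3.0, 3.0), rng.uniform(-3.0, 3.0), rng.uniform(2.2, 3.0)],
        "size": rng.uniform(0.1, 1.0),                      # area-light edge length in metres
        "brightness": rng.uniform(300.0, 1500.0),           # e.g. lumens
        "color": [rng.uniform(0.8, 1.0) for _ in range(3)], # near-white RGB
    }
```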
In some embodiments, optionally, configuring the target object model into the scene model, after generating the training scene file, further includes: acquiring a target object model configured into the scene model and second model information of the scene model; generating scene metadata of the training scene based on the second model information; wherein the second model information includes at least one of: location information, category information, name information, size information, and posture information.
In the embodiment of the present application, after the target object model is configured into the scene model to generate the training scene file, scene metadata is constructed by reading the second model information of the scene model and of the target object model. The scene metadata reflects the positional relationships among the models in the training scene, the size parameters of the models, and so on; through it, any model in the training scene file can be located quickly, which facilitates the subsequent automatic replacement of models in the training scene file.
As shown in fig. 7, the scene metadata includes the names of the rooms in the scene model, such as living room and kitchen, and each room has object model tags and building model tags. The object model tags indicate the type, number, size, positioning information and so on of the object models in the room, and include furniture model tags, door and window model tags, and the like. Illustratively, a door and window model tag can indicate the size of the door or window. The building model tags are determined according to the second model information of the scene model and include wall model tags, floor model tags, and roof corner model tags, which describe the outline of the whole room.
Recording the door and window model tags makes it convenient to automatically replace the doors and windows in simulation scenes of the same house, providing a training environment for tasks such as opening and closing doors and windows.
Illustratively, the primary menu in the configuration file is the room name, for example: living room, kitchen, bedroom, washroom, dining room, and so on; the secondary menu contains the corresponding parameters of each room, for example: furniture type, door and window size, walls, floor, ceiling, room height, and so on.
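One possible layout for such scene metadata is sketched below, mirroring the primary/secondary menu structure; all field names and values are hypothetical.

```python
# Hypothetical scene-metadata layout: primary keys are room names, the nested
# entries hold the per-room object model tags and building model tags.
scene_metadata = {
    "living room": {
        "object_models": [
            {"type": "sofa", "count": 1, "size": [2.0, 0.9, 0.8], "position": [1.2, 0.5, 0.0]},
            {"type": "door", "size": [0.9, 2.05]},
        ],
        "building_models": {"walls": 4, "floor": "wood", "ceiling_corners": 4, "room_height": 2.8},
    },
    "kitchen": {
        "object_models": [{"type": "refrigerator", "count": 1, "size": [0.6, 0.65, 1.8]}],
        "building_models": {"walls": 4, "floor": "tile", "ceiling_corners": 4, "room_height": 2.8},
    },
}
```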
In one embodiment according to the present application, as shown in fig. 8, a training scenario generation method is provided, including:
step 802, importing a three-dimensional floor-plan mesh, and rendering the house model with random materials according to the building structure labels;
The house model here is the scene model in any of the above embodiments.
Step 804, determining each room range according to the ground grid, inquiring a database, randomly selecting a furniture model corresponding to the room type, and importing the furniture model into a room;
wherein each room range is scene information of the house model in any one of the above embodiments, and the furniture model is the first object model in any one of the above embodiments.
Step 806, after a furniture model of a certain type is randomly obtained, adjusting the coordinates of the furniture in a room according to the object bounding box, the anchor point and the bias of the furniture;
the furniture of a certain type is the first object model in any embodiment, after the first object model reaches the scene model, the position of the first object model in a room is adjusted according to the object bounding box and the anchor point of the first object model and corresponding bias parameters, the anchor point is updated according to the construction origin of the first object model, and the bias parameters are used for carrying out bias processing on the initial construction origin when the construction origin is updated.
Step 808, reading the structured random file, and analyzing the carrier model and the types and the quantity of random articles to be placed;
the carrier model is the first object model in any embodiment, and the random object to be placed is the second object model in any embodiment. The structured random file is a structured stored model file.
Step 810, retrieving an item list meeting the conditions of the items to be added from a model database, and randomly selecting an item model from the list;
wherein the model database is the object model database in any of the above embodiments.
Step 812, randomly selecting the position and orientation of the object according to the placement area of the carrier model;
wherein the carrier model is the first object model in any embodiment, the object position is the configuration position in any embodiment, and the orientation is the configuration posture in any embodiment;
step 814, determining whether the article is within the carrier region; if not, returning to step 812; if yes, executing step 816;
step 816, judging whether the article interferes with other articles on the carrier; if yes, returning to step 812; if not, executing step 818;
step 818, adding the item to the carrier bearing item list;
wherein the carrier bearing article list is a list in the scene metadata in any of the above embodiments.
Step 820, adding an item collision volume, a random mass, and a random centroid;
wherein the collision volume, random mass and random centroid of the article are physical parameters configured for the model in the training scene file, respectively.
Step 822, calculating an inertial tensor according to the random mass and centroid;
step 824, generating a simulation scene, assigning it a unique identifier, and waiting for it to be called.
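For steps 820-822, one self-contained way to attach randomized physics is sketched below; approximating the item as a solid cuboid for the inertia tensor is an assumption, since the text does not specify the formula.

```python
import random

def random_physics(bbox, mass_range=(0.1, 1.0), rng=random):
    """Steps 820-822: random mass and centroid inside the bounding box, plus a
    diagonal inertia tensor computed as if the item were a solid cuboid."""
    x_min, y_min, z_min, x_max, y_max, z_max = bbox
    w, d, h = x_max - x_min, y_max - y_min, z_max - z_min
    mass = rng.uniform(*mass_range)
    centroid = [rng.uniform(x_min, x_max), rng.uniform(y_min, y_max), rng.uniform(z_min, z_max)]
    # Solid-cuboid inertia about its center: I_x = m(d^2 + h^2)/12, and so on.
    inertia = [mass * (d * d + h * h) / 12.0,
               mass * (w * w + h * h) / 12.0,
               mass * (w * w + d * d) / 12.0]
    return {"mass": mass, "centroid": centroid, "inertia_diagonal": inertia}
```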
According to the technical solution of the present application, when the scene model is acquired, the scene information corresponding to the scene model is acquired, and a target object model matching the scene information is randomly selected from the object model library by a randomization algorithm. By combining the randomly extracted target object model with the scene model, a training scene file with strong generalization and randomness can be obtained.
In one embodiment according to the present application, as shown in fig. 9, a training scenario generating apparatus 900 is provided, including:
an obtaining module 902, configured to obtain an object model library;
the determining module 904 is configured to determine, according to scene information of the scene model, a target object model in the object model library by using a domain stochastic algorithm, where the target object model is matched with the scene information;
A configuration module 906, configured to configure the target object model into the scene model, and generate a training scene file.
The embodiment of the present application provides a training scene generation apparatus for building simulation training scenes for robots. By acquiring the object model library and analyzing the scene information of the scene model, a target object model matching the scene information can be determined, where the target object model is found in the object model library by a randomization algorithm. The target object model is then configured into the scene model to generate a training scene file; searching the object model library with a randomization algorithm and combining the found target object model with the scene model improves the generalization and randomness of the training scene files.
In this embodiment, object models of different sources are stored in the object model library, and the object model library constructed by the object models of different sources enables a plurality of object models of different sources to be selected when constructing the training scene file.
In this embodiment, the scene model is the model required in building the training scene file, which is the carrier of the remaining object models. The scene information includes information pre-stored in the scene model including, but not limited to, scene category information, scene size information, scene function information, and the like. Through the scene information, the relevant information of the object models required to be placed in the scene model can be determined, so that a plurality of object models which can be placed in the scene model in the object model library can be determined, and the target object model is extracted from the plurality of object models which are found based on the scene information through a random algorithm.
In the embodiment of the present application, when a scene model is acquired, the scene information corresponding to the scene model is acquired, and a target object model matching the scene information is randomly selected from the object model library by a randomization algorithm. By combining the randomly extracted target object model with the scene model, a training scene file with strong generalization and randomness can be obtained.
In some embodiments, optionally, the target object model comprises a first object model and a second object model, the first object model being capable of carrying the second object model; training scenario generation apparatus 900 further comprises:
the extraction module is used for extracting a first model list from the object model library based on scene information;
the extraction module is used for extracting the individual instance of the first object model in the first model list through a domain random algorithm;
an obtaining module 902, configured to obtain object class information of the first object model;
the extraction module is used for extracting a second model list from the object model library based on the object category information;
and the extracting module is used for extracting the individual instance of the second object model in the second model list through a domain random algorithm.
In the embodiment of the application, the first object model directly put into the scene model and the second object model put into the first object model can be found in the scene model part file based on the scene information through a domain random algorithm. And randomly extracting a first object model matched with the scene information, and randomly extracting a second object model matched with the object type information of the first object model, so that a large number of object models in the scene model are extracted through a random algorithm, and the generalization randomness of the finally generated training scene file is further improved.
According to the embodiment of the application, according to the scene information in the scene model, a first model list which can be directly put into the scene model can be extracted from the object model library, and the first object model in the first model list is randomly extracted. According to object type information of the first object model, a second model list in the object model library can be extracted, and a second object model in the second model list is randomly extracted, so that the object model can be extracted in the object model library in a multistage manner, the extracted first object model placed in the scene model and the second object model placed in the first object model can be ensured to conform to common sense, and rationality of a simulation scene corresponding to the training scene file is ensured while generalization random capability of the generated training scene file is improved.
In some embodiments, optionally, an acquiring module 902 is configured to acquire an object placement area of the scene model;
the extraction module is used for extracting a target area in the object placement area through a rule judgment algorithm according to the object category information;
a configuration module 906 is configured to configure the first object model to the target area.
In the embodiment of the application, a target area for placing the first object model is determined within the placement area of the scene model through a rule judgment algorithm, and the first object model is configured in the target area. The target area is selected at random from the object placement area of the scene model, so that the generalization and random capability of the generated training scene file is further improved.
In the embodiment of the application, after the object placement area in the scene model is determined, the target area in the placement area is randomly extracted based on the category information of the first object model, so that the first object model can be randomly placed in the scene model in a reasonable range, the random capability of the generated training scene file is ensured, and the rationality of the simulation scene is ensured.
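One possible reading of the rule judgment step, sketched under the assumption of a simple hand-written rule table; PLACEMENT_RULES and the area tuples are hypothetical:

```python
import random

# Hypothetical rule table: which placement-area types admit which object categories.
PLACEMENT_RULES = {
    "floor":   {"table", "sofa", "cabinet"},
    "wall":    {"shelf", "picture_frame"},
    "counter": {"microwave", "kettle"},
}

def pick_target_area(placement_areas, object_category, rng=random):
    """placement_areas: list of (area_type, area_bounds) tuples read from the scene model."""
    # Rule judgment: keep only areas that are allowed to hold this object category.
    candidates = [area for area in placement_areas
                  if object_category in PLACEMENT_RULES.get(area[0], set())]
    # Random extraction of the target area among the admissible candidates.
    return rng.choice(candidates) if candidates else None

areas = [("floor", (0, 0, 5, 4)), ("wall", (0, 0, 5, 2.8)), ("counter", (1, 1, 2, 1))]
print(pick_target_area(areas, "table"))   # expected: the "floor" area
```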
In some embodiments, optionally, an acquiring module 902 is configured to acquire containment area information of the first object model;
a determining module 904, configured to determine, according to the accommodation area information, a configuration position and a configuration posture of the second object model in the first object model through a random algorithm;
a configuration module 906 configured to configure the second object model to the first object model according to the configuration pose and the configuration position.
In the embodiment of the application, when the target object model includes the second object model and the first object model has already been configured to the scene model, the configuration position and configuration posture of the second object model in the first object model can be randomly selected according to the accommodation area information of the first object model. The second object model is then placed on the first object model according to the configuration posture and configuration position.
In the embodiment of the application, when the second object model is configured to the first object model, the accommodation area of the first object model is relatively small and the types of second object models that can be placed in it are relatively fixed, so the configuration posture and configuration position of the second object model can be selected at random directly from the accommodation area information. This simplifies the process of configuring the second object model to the first object model while improving randomness.
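A sketch of how the configuration position and posture might be sampled from the accommodation area information; the bounding tuple and yaw-only posture are simplifying assumptions:

```python
import random

def sample_pose_in_accommodation(accommodation_box, rng=random):
    """accommodation_box: (x_min, y_min, x_max, y_max) of the first object's carrying surface.
    Returns a hypothetical (x, y, yaw_degrees) configuration for the second object model."""
    x_min, y_min, x_max, y_max = accommodation_box
    x = rng.uniform(x_min, x_max)     # configuration position within the area
    y = rng.uniform(y_min, y_max)
    yaw = rng.uniform(0.0, 360.0)     # configuration posture: rotation about the vertical axis
    return x, y, yaw

print(sample_pose_in_accommodation((0.1, 0.1, 0.9, 0.5)))
```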
In some embodiments, optionally, the number of second object models is at least one;
training scenario generation apparatus 900 further comprises:
and the execution module is used for returning to execute the step of determining the configuration position and the configuration posture of the second object model in the first object model through a random algorithm according to the accommodation area information, under the condition that the number of the second object models is at least two and an overlapping area exists between at least two second object models.
In the embodiment of the application, after the second object models are configured to the first object model, whether their placement positions are reasonable is judged by checking whether any of the second object models overlap, so that interference among the second object models and errors in the built simulation scene are avoided.
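The overlap handling could, for example, be realised as an axis-aligned overlap test with resampling, as in the following sketch; the footprint representation and retry budget are assumptions:

```python
import random

def aabb_overlap(a, b):
    """Axis-aligned overlap test; each box is (x_min, y_min, x_max, y_max)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def place_without_overlap(accommodation_box, sizes, max_tries=100, rng=random):
    """Place several second-object footprints of given (w, h) sizes without overlap,
    resampling a position whenever an overlap is detected."""
    placed = []
    x_min, y_min, x_max, y_max = accommodation_box
    for w, h in sizes:
        for _ in range(max_tries):
            x = rng.uniform(x_min, x_max - w)
            y = rng.uniform(y_min, y_max - h)
            box = (x, y, x + w, y + h)
            if all(not aabb_overlap(box, other) for other in placed):
                placed.append(box)
                break
        else:
            return None   # could not resolve overlaps within the retry budget
    return placed

print(place_without_overlap((0, 0, 1.0, 0.6), [(0.2, 0.2), (0.3, 0.15)]))
```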
In some embodiments, optionally, an obtaining module 902 is configured to obtain the object model, and first model information of the object model;
training scenario generation apparatus 900 comprises:
the generation module is used for generating model metadata corresponding to the object model according to the first model information;
and the construction module is used for constructing an object model library based on the model metadata.
In the embodiment of the application, after the object model is acquired, the object model and the corresponding first model information thereof are stored in the object model library in the form of the model metadata, so that the object model can be conveniently extracted from the object model library later.
In some embodiments, optionally, a generating module is configured to generate a model tag corresponding to the object model according to the first model information;
the generating module is used for configuring the model tag into the object model to generate the model metadata; wherein the model tag comprises at least one of: name tags, category tags, sub-category tags, volume tags, anchor tags, bias tags, size tags, physics tags.
In the embodiment of the application, the corresponding model tags are configured in the object model, so that the generated model metadata comprises the various model tags, and a matching object model can be quickly searched for in the object model library through the model tags.
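A sketch of what such model metadata might look like when the listed tags are attached to an object model record; the field layout and example values are illustrative:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelTags:
    name: str
    category: str
    sub_category: str
    volume: float          # bounding volume, e.g. in cubic metres
    anchor: tuple          # anchor point in the model's local frame
    bias: tuple            # placement offset applied when the model is configured
    size: tuple            # (x, y, z) extents
    physics: str           # e.g. "static" or "dynamic"

def build_model_metadata(mesh_path, tags):
    # Model metadata: the mesh reference together with its model tags.
    return {"mesh": mesh_path, "tags": asdict(tags)}

meta = build_model_metadata(
    "assets/cup_01.obj",
    ModelTags("ceramic_cup", "tableware", "cup", 0.0004,
              (0, 0, 0), (0, 0, 0.02), (0.08, 0.08, 0.10), "dynamic"),
)
print(json.dumps(meta, indent=2))
```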
In some embodiments, optionally, the training scenario generating apparatus 900 further includes:
the construction module is used for calculating an object bounding box corresponding to the object model;
and the adjusting module is used for adjusting the object model based on the object bounding box so as to enable the model parameters of the object model to be matched with the preset parameters.
In the embodiment of the application, the model parameters of object models from different sources are unified to match the preset parameters, so that the model parameters in the object model library are relatively consistent. The extracted object models can then be conveniently configured to the scene model, and the process of automatically configuring object models to the scene model can be realized.
In some embodiments, optionally, the model parameters include a model construction origin, the preset parameters include a target position of the construction origin, and the training scene generating apparatus 900 further includes:
and the moving module is used for moving the construction origin of the object model to the target position based on the coordinate information of the object bounding box.
In the embodiment of the application, the construction origins of the object models from different sources are different, and unified processing is required for the different construction origins in order to facilitate the subsequent automatic placement of the object models into the scene model.
In the embodiment of the application, the coordinate information of the object bounding box of the object model and the target position of the construction origin are obtained, so that the construction origins of a plurality of object models are uniformly moved to the target position. Each object model in the object model library then shares the same construction origin convention, which facilitates the subsequent process of configuring object models to the scene model.
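For illustration, one common choice of target position is the bottom centre of the bounding box; the following sketch assumes the object model is available as a vertex array:

```python
import numpy as np

def move_origin_to_bottom_centre(vertices):
    """Shift the mesh vertices so the construction origin lies at the bottom centre
    of the axis-aligned bounding box (one possible choice of target position)."""
    v = np.asarray(vertices, dtype=float)
    lo, hi = v.min(axis=0), v.max(axis=0)            # bounding-box corners
    target = np.array([(lo[0] + hi[0]) / 2, (lo[1] + hi[1]) / 2, lo[2]])
    return v - target                                # vertices expressed about the new origin

verts = [[1, 1, 0.5], [3, 2, 0.5], [1, 1, 1.5], [3, 2, 1.5]]
print(move_origin_to_bottom_centre(verts))
```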
In some embodiments, optionally, the model parameter includes a scaling ratio, the preset parameter includes a preset ratio, and the training scene generating device 900 further includes:
and the adjusting module is used for adjusting the scaling ratio of the object model to a preset ratio based on the coordinate information of the object bounding box and the preset size parameter.
In the embodiment of the application, the scaling scales of the object models from different sources are different, and unified processing is required for the different scaling scales in order to facilitate the subsequent automatic placement of the object models into the scene model.
In the embodiment of the application, the scaling of the object model is adjusted to be the preset scale according to the coordinate information of the object bounding box of the object model and the preset size parameter, so that the scaling of the object model is unified, and the subsequent process of configuring the object model to the scene model is facilitated.
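A corresponding sketch for the scaling adjustment, assuming the preset size parameter is the desired longest bounding-box edge:

```python
import numpy as np

def rescale_to_preset_size(vertices, preset_longest_edge):
    """Uniformly rescale the mesh so the longest edge of its bounding box equals the
    preset size parameter; the returned factor is the adjusted scaling ratio."""
    v = np.asarray(vertices, dtype=float)
    extents = v.max(axis=0) - v.min(axis=0)
    scale = preset_longest_edge / extents.max()
    return v * scale, scale

verts = [[0, 0, 0], [2, 0, 0], [0, 1, 0], [0, 0, 0.5]]
scaled, factor = rescale_to_preset_size(verts, 1.0)
print(factor)   # 0.5 for these example vertices
```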
In some embodiments, optionally, a configuration module 906 for configuring the light source parameters for the scene model by a random algorithm; wherein the light source parameters include at least one of: light source location, light source size, light source luminance, light source color.
In an embodiment of the present application, after the target object model is configured to the scene model, the method further includes: configuring light source parameters for the scene model, the light source parameters including at least one of the light source position, light source size, light source brightness and light source color. Illumination and shadow are then rendered in the robot training scene through a real-time ray tracing algorithm, so that the realism of the training simulation scene is improved.
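A sketch of the light source randomization step; the parameter ranges and the room-bounds tuple are assumptions, not values from the embodiment:

```python
import random

def randomize_light_source(room_bounds, rng=random):
    """Draw light source parameters at random; the ranges are illustrative only."""
    x_min, y_min, x_max, y_max, height = room_bounds
    return {
        "position":   (rng.uniform(x_min, x_max), rng.uniform(y_min, y_max), height),
        "size":       rng.uniform(0.1, 1.0),              # emitter size in metres
        "brightness": rng.uniform(200.0, 2000.0),         # arbitrary intensity units
        "color":      tuple(rng.uniform(0.8, 1.0) for _ in range(3)),  # warm-white RGB
    }

print(randomize_light_source((0, 0, 5, 4, 2.8)))
```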
In some embodiments, optionally, an obtaining module 902 is configured to obtain the target object model configured into the scene model, and second model information of the scene model;
the generating module is used for generating scene metadata of the training scene based on the second model information; wherein the second model information includes at least one of: location information, category information, name information, size information, and posture information.
In the embodiment of the application, after the target object model is configured to the scene model to generate the training scene file, scene metadata is constructed by reading the second model information of the scene model and the target object model. The scene metadata reflects the positional relationships among the models in the training scene, the size parameters of the models and the like, so that any model in the training scene file can be rapidly located through the scene metadata, which facilitates subsequent automatic replacement of models in the training scene file.
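A sketch of how the scene metadata could be assembled from the second model information of the scene model and the configured object models; the dictionary layout is illustrative:

```python
import json

def build_scene_metadata(scene_model, placed_objects):
    """Collect second model information (name, category, position, size, pose)
    for the scene model and every configured object model into one record."""
    return {
        "scene": {"name": scene_model["name"], "size": scene_model["size"]},
        "objects": [
            {
                "name": obj["name"],
                "category": obj["category"],
                "position": obj["position"],
                "size": obj["size"],
                "pose": obj["pose"],
            }
            for obj in placed_objects
        ],
    }

scene = {"name": "kitchen_01", "size": (5.0, 4.0, 2.8)}
objects = [{"name": "dining_table", "category": "table",
            "position": (2.0, 1.5, 0.0), "size": (1.2, 0.8, 0.75), "pose": (0, 0, 90)}]
print(json.dumps(build_scene_metadata(scene, objects), indent=2))
```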
In one embodiment according to the present application, as shown in fig. 10, there is provided a training scene generating apparatus 1000 including: a processor 1002 and a memory 1004, the memory 1004 having stored therein programs or instructions; the processor 1002 executes programs or instructions stored in the memory 1004 to implement the steps of the training scenario generation method in any of the above embodiments, so that all the beneficial technical effects of the training scenario generation method in any of the above embodiments are provided, and will not be described in detail herein.
In one embodiment according to the present application, a readable storage medium is provided, on which a program or an instruction is stored, which when executed by a processor, implements the steps of the training scenario generation method as in any of the embodiments described above. Therefore, the training scene generation method in any of the above embodiments has all the beneficial technical effects, and will not be described in detail herein.
In an embodiment according to the present application, a computer program product is provided, which when executed by a processor, implements the steps of the training scenario generation method in any of the foregoing embodiments, so that all the beneficial technical effects of the training scenario generation method in any of the foregoing embodiments are provided, and will not be described in detail herein.
In one embodiment according to the present application, as shown in fig. 11, an electronic device 1100 is provided, comprising: the training scenario generation apparatus 1000 in any of the embodiments described above, and/or the readable storage medium 1102 in any of the embodiments described above, and/or the computer program product 1104 in any of the embodiments described above, thus has all the technical advantages of the training scenario generation apparatus 1000 in any of the embodiments described above, and/or the readable storage medium 1102 in any of the embodiments described above, and/or the computer program product 1104 in any of the embodiments described above, and will not be described in detail herein.
It is to be understood that in the claims, specification and drawings of the present application, the term "plurality" means two or more. Unless otherwise explicitly defined, the orientation or positional relationship indicated by terms such as "upper" and "lower" is based on the orientation or positional relationship shown in the drawings, is used only for the convenience of describing the present application and simplifying the description, and does not indicate or imply that the apparatus or element in question must have the particular orientation described or be constructed and operated in that orientation, so these descriptions should not be construed as limiting the present application. The terms "connected", "mounted", "secured" and the like are to be construed broadly and may be, for example, a fixed connection between a plurality of objects, a removable connection between a plurality of objects, or an integral connection; the objects may be directly connected to each other or indirectly connected through an intermediate medium. The specific meaning of these terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
In the claims, specification, and drawings of the present application, the descriptions of the terms "one embodiment," "some embodiments," "particular embodiments," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in the embodiment or example of the present application. In the claims, specification and drawings of the present application, the schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above is only a preferred embodiment of the present application, and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (17)
1. A training scenario generation method, comprising:
obtaining an object model library;
determining a target object model in the object model library through a domain random algorithm according to scene information of a scene model, wherein the target object model is matched with the scene information;
and configuring the target object model into the scene model to generate a training scene file.
2. The training scenario generation method of claim 1, wherein the target object model comprises a first object model and a second object model, the first object model being capable of carrying the second object model;
the determining the target object model in the object model library according to the scene information of the scene model through a domain random algorithm comprises the following steps:
extracting a first model list from the object model library based on the scene information;
extracting individual instances of the first object model in the first model list through a domain random algorithm;
acquiring object category information of the first object model;
extracting a second model list from the object model library based on the object class information;
extracting individual instances of the second object model in the second model list through a domain random algorithm.
3. The training scene generation method of claim 2, wherein the target object model comprises the first object model, and wherein the configuring the target object model into the scene model to generate the training scene file comprises:
acquiring an object placement area of the scene model;
extracting a target area in the object placement area through a rule judgment algorithm according to the object category information;
the first object model is configured to the target region.
4. The training scenario generation method of claim 3, wherein the target object model comprises the second object model, and wherein after the first object model is configured to the target region, the method further comprises:
acquiring accommodation area information of the first object model;
determining the configuration position and the configuration posture of the second object model in the first object model through a random algorithm according to the accommodation area information;
and configuring the second object model to the first object model according to the configuration posture and the configuration position.
5. The training scenario generation method of claim 4, wherein the number of second object models is at least one;
after the second object model is configured to the first object model according to the configuration posture and the configuration position, the method further comprises:
and returning to execute the step of determining the configuration position and the configuration posture of the second object model in the first object model through a random algorithm according to the accommodation area information under the condition that the number of the second object models is at least two and an overlapping area exists between at least two second object models.
6. The training scenario generation method according to any one of claims 1 to 5, wherein the acquiring the object model library includes:
acquiring an object model and first model information of the object model;
generating model metadata corresponding to the object model according to the first model information;
and constructing the object model library based on the model metadata.
7. The training scenario generation method according to claim 6, wherein the generating model metadata corresponding to the object model from the first model information includes:
generating a model label corresponding to the object model according to the first model information;
configuring the model tag in the object model to generate the model metadata;
wherein the model tag comprises at least one of: name tags, category tags, sub-category tags, volume tags, anchor tags, bias tags, size tags, physics tags.
8. The training scenario generation method of claim 6, further comprising, after the obtaining the object model:
calculating an object bounding box corresponding to the object model;
and adjusting the object model based on the object bounding box so as to enable model parameters of the object model to be matched with preset parameters.
9. The training scenario generation method of claim 8, wherein the model parameters include a model construction origin, the preset parameters include a target position of the construction origin, and the adjusting the object model based on the object bounding box includes:
and moving a construction origin of the object model to the target position based on the coordinate information of the object bounding box.
10. The training scenario generation method of claim 8, wherein the model parameters include scaling, the preset parameters include preset proportions, and the adjusting the object model based on the object bounding box includes:
and adjusting the scaling of the object model to the preset proportion based on the coordinate information of the object bounding box and the preset size parameter.
11. The training scene generation method according to any one of claims 1 to 5, wherein the configuring of the target object model into the scene model, before generating the training scene file, further comprises:
configuring light source parameters for the scene model through a random algorithm;
wherein the light source parameters include at least one of: light source location, light source size, light source luminance, light source color.
12. The training scene generation method according to any one of claims 1 to 5, wherein the configuring of the target object model into the scene model, after generating the training scene file, further comprises:
acquiring the target object model configured into the scene model and second model information of the scene model;
generating scene metadata of the training scene based on the second model information;
wherein the second model information includes at least one of: location information, category information, name information, size information, and posture information.
13. A training scene generation apparatus, comprising:
the acquisition module is used for acquiring an object model library;
the determining module is used for determining a target object model in the object model library through a domain random algorithm according to scene information of the scene model, and the target object model is matched with the scene information;
and the configuration module is used for configuring the target object model into the scene model to generate a training scene file.
14. A training scene generation apparatus, comprising:
a memory having stored thereon programs or instructions;
a processor for implementing the steps of the training scenario generation method according to any one of claims 1 to 12 when executing the program or instructions.
15. A readable storage medium having stored thereon a program or instructions, which when executed by a processor, implement the steps of the training scenario generation method of any one of claims 1 to 12.
16. A computer program product, characterized in that the computer program product, when executed by a processor, implements the steps of the training scenario generation method of any one of claims 1 to 12.
17. An electronic device, comprising:
training scene generation apparatus according to claim 13 or 14; and/or
The readable storage medium of claim 15; and/or
The computer program product of claim 16.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310921901.3A (CN116977787A) | 2023-07-25 | 2023-07-25 | Training scene generation method and device, electronic equipment and readable storage medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116977787A | 2023-10-31 |
Family
ID=88476220
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310921901.3A (Pending) | Training scene generation method and device, electronic equipment and readable storage medium | 2023-07-25 | 2023-07-25 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN116977787A (en) |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |