CN117197296A - Traffic road scene simulation method, electronic equipment and storage medium - Google Patents

Traffic road scene simulation method, electronic equipment and storage medium

Info

Publication number
CN117197296A
CN117197296A (Application CN202311014289.8A)
Authority
CN
China
Prior art keywords
data
model
simulation
scene
simulated
Prior art date
Legal status
Pending
Application number
CN202311014289.8A
Other languages
Chinese (zh)
Inventor
杨彦龙
奚燕
丁佳
吴珍珍
周韦韦
林杨平
Current Assignee
Zhejiang Supcon Information Industry Co Ltd
Original Assignee
Zhejiang Supcon Information Industry Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Supcon Information Industry Co Ltd


Landscapes

  • Processing Or Creating Images (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a traffic road scene simulation method, electronic equipment and a storage medium, relating to the technical field of traffic digitization. The method comprises the following steps: based on acquired configuration information relevant to a road scene to be simulated, a target component corresponding to that road scene is generated, and each operation parameter is initialized in the target component, so that simulation of the road scene to be simulated can be executed based on the target component. By componentizing the logic code related to the service scene, a developer only needs to provide the configuration information relevant to the road scene to be simulated in order to quickly generate the corresponding target component, which reduces the developer's learning cost while improving simulation efficiency. In addition, the code component can meet the logic requirements of different services, reducing the cost of secondary development.

Description

Traffic road scene simulation method, electronic equipment and storage medium
Technical Field
The application relates to the technical field of traffic digitization, in particular to a traffic road scene simulation method, electronic equipment and a storage medium.
Background
With the continuous development of digital twin technology, the simulation of real traffic road scenes can be realized by means of real-time data, algorithm models and the like, so that the dynamic monitoring of the life cycle of road infrastructure and the accurate restoration of traffic participants on the road surface are realized, and accurate basis is provided for road traffic diagnosis and traffic management decision.
In the prior art, simulating different scenes requires a developer to write code for each scene based on the real model data and related configuration parameters of that scene. This imposes a relatively high learning cost on developers and results in relatively low development efficiency.
Disclosure of Invention
The application aims to overcome the defects in the prior art and provide a traffic road scene simulation method, electronic equipment and a storage medium so as to improve the development efficiency of traffic scene simulation.
In order to achieve the above purpose, the technical scheme adopted by the embodiment of the application is as follows:
in a first aspect, an embodiment of the present application provides a traffic road scene simulation method, including:
acquiring relevant configuration information of a road scene to be simulated, and generating a target component corresponding to the road scene to be simulated according to the relevant configuration information; the relevant configuration information includes: interface data, business logic threshold data, intersection data, scene rendering data, model assistance data, and model file data;
initializing each basic parameter corresponding to the road scene to be simulated in the target component according to the business logic threshold data, the intersection data and the model auxiliary data;
Dynamically loading a rendering engine in the target component, and initializing the rendering engine according to the interface data and the scene rendering data;
in the target component, sequentially loading and parsing model data of each model in the road scene to be simulated according to the model file data, the model auxiliary data and the configured target scene type;
and acquiring simulation data in the target component according to an identified camera operation instruction, and generating a simulation picture of the road scene to be simulated according to the simulation data and the model data of each model.
Optionally, the interface data includes: data subscription interface data, service communication connection interface data, system login interface data;
the business logic threshold data comprises: a distance threshold value between the camera and the intersection and a height threshold value between the camera and the ground;
the intersection data includes: intersection longitude and latitude data;
the scene rendering data includes: camera position data and pose data, scene light data;
the model assistance data includes: model style adjustment data, model animation data;
the model file data includes: storage path information of each model in the road scene to be simulated.
Optionally, the sequentially loading and analyzing to obtain the model data of each model in the road scene to be simulated according to the model file data, the model auxiliary data and the configured target scene type includes:
according to the storage path information of each model and the display sequence of each model under the target scene type, loading and analyzing the original model data of each model from the storage path of each model in sequence;
according to the model auxiliary data, model style adjustment data and model animation data of each model are obtained;
and updating the original model data of each model according to the model style adjustment data and the model animation data of each model to obtain the model data of each model.
Optionally, updating the original model data of each model according to the model style adjustment data and the model animation data of each model to obtain model data of each model, including:
if the model has model style adjustment data, replacing corresponding model style data in original model data of the model by adopting the model style adjustment data;
if the model has model animation data, the model animation data is added into the original model data of the model.
Optionally, the acquiring simulation data according to the identified camera operation instruction includes:
responding to camera moving operation input through external trigger equipment, and acquiring current position information of a camera bound with the external trigger equipment;
generating a simulation data acquisition instruction according to the current position information of the camera and the business logic threshold data;
and acquiring real-time traffic data of the camera at the current position according to the simulation data acquisition instruction, and taking the real-time traffic data as the simulation data.
Optionally, the generating a simulation data acquisition instruction according to the current position information of the camera and the service logic threshold data includes:
determining a current scene type according to the current position information of the camera and a height threshold value of the camera and the ground;
determining state display parameters of each model under the scene type and the current position of the camera according to the current position information of the camera and the distance threshold value between the camera and the intersection;
and generating the simulation data acquisition instruction according to the state display parameters, wherein the simulation data acquisition instruction is used for acquiring real-time traffic data of each model under the corresponding state display parameters.
Optionally, the generating a simulation picture of the road scene to be simulated according to the simulation data and the model data of each model includes:
and carrying out model animation simulation according to model data of each model and simulation data of each model under corresponding state display parameters, and generating a simulated traffic picture of the road scene to be simulated under the scene type and the camera position.
Optionally, the performing model animation simulation according to the model data of each model and the simulation data of each model under the corresponding state display parameters to generate a simulated traffic picture of the road scene to be simulated under the scene type and the camera position includes:
model data of each model are added into a rendering queue;
and sequentially reading model data of each model from the rendering queue, and performing model animation rendering according to simulation data of each model under corresponding state display parameters so as to display the simulated traffic picture.
In a second aspect, an embodiment of the present application further provides a traffic road scene simulation apparatus, including: a generating module, an initializing module and a loading module;
the generation module is used for acquiring relevant configuration information of the road scene to be simulated and generating a target component corresponding to the road scene to be simulated according to the relevant configuration information; the relevant configuration information includes: interface data, business logic threshold data, intersection data, scene rendering data, model assistance data, and model file data;
The initialization module is used for initializing each basic parameter corresponding to the road scene to be simulated in the target component according to the business logic threshold data, the intersection data and the model auxiliary data;
the initialization module is used for dynamically loading a rendering engine in the target component and initializing the rendering engine according to the interface data and the scene rendering data;
the loading module is used for sequentially loading and parsing, in the target component, model data of each model in the road scene to be simulated according to the model file data, the model auxiliary data and the configured target scene type;
the generation module is used for acquiring simulation data in the target component according to the identified camera operation instruction, and generating a simulation picture of the road scene to be simulated according to the simulation data and the model data of each model.
Optionally, the interface data includes: data subscription interface data, service communication connection interface data, system login interface data;
the business logic threshold data comprises: a distance threshold value between the camera and the intersection and a height threshold value between the camera and the ground;
The intersection data includes: intersection longitude and latitude data;
the scene rendering data includes: camera position data and pose data, scene light data;
the model assistance data includes: model style adjustment data, model animation data;
the model file data includes: storage path information of each model in the road scene to be simulated.
Optionally, the loading module is specifically configured to sequentially load and parse raw model data of each model from a storage path of each model according to storage path information of each model and a display sequence of each model under the target scene type;
according to the model auxiliary data, model style adjustment data and model animation data of each model are obtained;
and updating the original model data of each model according to the model style adjustment data and the model animation data of each model to obtain the model data of each model.
Optionally, the loading module is specifically configured to replace, if the model has model style adjustment data, corresponding model style data in original model data of the model with the model style adjustment data;
if the model has model animation data, the model animation data is added into the original model data of the model.
Optionally, the generating module is specifically configured to obtain current position information of a camera bound to an external trigger device in response to a camera moving operation input through the external trigger device;
generating a simulation data acquisition instruction according to the current position information of the camera and the business logic threshold data;
and acquiring real-time traffic data of the camera at the current position according to the simulation data acquisition instruction, and taking the real-time traffic data as the simulation data.
Optionally, the generating module is specifically configured to determine a current scene type according to current position information of the camera and a height threshold value of the camera and the ground;
determining state display parameters of each model under the scene type and the current position of the camera according to the current position information of the camera and the distance threshold value between the camera and the intersection;
and generating the simulation data acquisition instruction according to the state display parameters, wherein the simulation data acquisition instruction is used for acquiring real-time traffic data of each model under the corresponding state display parameters.
Optionally, the generating module is specifically configured to perform model animation simulation according to model data of each model and simulation data of each model under corresponding state display parameters, and generate a simulated traffic picture of the road scene to be simulated under the scene type and the camera position.
Optionally, the generating module is specifically configured to add model data of each model to a rendering queue;
and sequentially reading model data of each model from the rendering queue, and performing model animation rendering according to simulation data of each model under corresponding state display parameters so as to display the simulated traffic picture.
In a third aspect, an embodiment of the present application provides an electronic device, including: the system comprises a processor, a storage medium and a bus, wherein the storage medium stores machine-readable instructions executable by the processor, and when the electronic device is running, the processor and the storage medium are communicated through the bus, and the processor executes the machine-readable instructions to realize the traffic road scene simulation method as provided in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the traffic road scene simulation method as provided in the first aspect.
The beneficial effects of the application are as follows:
the application provides a traffic road scene simulation method, electronic equipment and a storage medium, which can generate a target component corresponding to a road scene to be simulated based on acquired configuration information relevant to that road scene, and initialize the operation parameters in the target component, so that simulation of the road scene to be simulated can be executed based on the target component. By componentizing the logic code related to the service scene, a developer only needs to provide the configuration information relevant to the road scene to be simulated in order to quickly generate the corresponding target component, which reduces the developer's learning cost while improving simulation efficiency. In addition, the code component can meet the logic requirements of different services, reducing the cost of secondary development.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a traffic road scene simulation method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of another traffic road scene simulation method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of another traffic road scene simulation method according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of another traffic road scene simulation method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of another traffic road scene simulation method according to an embodiment of the present application;
fig. 6 is a schematic diagram of a traffic road scene simulation device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for the purpose of illustration and description only and are not intended to limit the scope of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this disclosure, illustrates operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to or removed from the flow diagrams by those skilled in the art under the direction of the present disclosure.
In addition, the described embodiments are only some, but not all, embodiments of the application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that the term "comprising" will be used in embodiments of the application to indicate the presence of the features stated hereafter, but not to exclude the addition of other features.
With the continuous expansion of traffic services, digital traffic services in regions across the country keep increasing, and digital traffic road scene simulation is performed in many of them. For different regions, the basic service logic is the same and only the resources and data change, yet few developers fully grasp the development flow. Based on these problems, this solution applies a componentized approach to the development of digital road projects: past development projects are abstracted and encapsulated into componentized instances, which helps the service side quickly integrate three-dimensional scene simulation, reduces the learning cost of front-end developers, and improves development efficiency.
Fig. 1 is a schematic flow chart of a traffic road scene simulation method according to an embodiment of the present application; the execution subject of the method may be a terminal device or a server in communication with the terminal device, as shown in fig. 1, and the method may include:
s101, acquiring relevant configuration information of a road scene to be simulated, and generating a target component corresponding to the road scene to be simulated according to the relevant configuration information; the relevant configuration information includes: interface data, business logic threshold data, intersection data, scene rendering data, model assistance data, and model file data.
According to the scheme, based on the componentization technology, the related code program for carrying out road scene simulation is subjected to abstract encapsulation to generate componentization examples, so that when a developer carries out road scene simulation, the componentization examples can be called by inputting configuration information related to a current scene to be simulated to simulate the road scene, the quick response under different business logic requirements is met, the development efficiency is improved, and meanwhile, the development cost is reduced.
Optionally, the developer may input the relevant configuration information of the road scene to be simulated according to the simulation requirement, where the road scene to be simulated may be any road segment of interest in the real road scene, or the roads within any region. The developer can input the relevant configuration information in real time according to the simulation requirement; alternatively, the developer can generate in advance a data file containing the relevant configuration information of the road scene to be simulated, and when the simulation is performed, the data file can first be read to obtain that configuration information.
Wherein the relevant configuration information may include, but is not limited to: interface data, business logic threshold data, intersection data, scene rendering data, model assistance data, and model file data.
The interface data can refer to the interface information that the component requires to be agreed upon and provided; through the interface data, different parameterized calls can be made so as to execute the corresponding business logic.
The business logic threshold data is used to agree on execution trigger conditions for some business logic.
The intersection data may refer to related data of an intersection to be displayed in the simulation scene.
Scene rendering data may refer to background data related to the simulated scene, such as scene lighting, the sky, and so on.
The model assistance data may refer to some assistance data related to the model in the scene to be simulated, i.e. some ancillary property data of the model.
Model file data may refer to data of the model itself, i.e. three-dimensional model data.
Different road scenes to be simulated differ from one another, so their relevant configuration information can also differ; the target component corresponding to a given road scene to be simulated can then be generated based on that scene's relevant configuration information. The road scene to be simulated can thus be simulated based on the target component.
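As an illustrative sketch only (the type and field names below are assumptions rather than the component's actual interface), packaging the relevant configuration information and generating the target component might look roughly like this in TypeScript:

    // Illustrative sketch only; the type and field names below are assumptions, not the component's actual API.
    interface SceneConfig {
      interfaceData: { loginUrl: string; subscribeUrl: string; unsubscribeUrl: string; websocketUrl: string };
      businessThresholds: { cameraIntersectionDistance: number; cameraGroundHeight: [number, number] };
      intersections: Record<string, { position: number[]; target: number[]; center: number[] }>;
      sceneRendering: { cameraPosition: number[]; cameraPose: number[]; lights: object[] };
      modelAssist: { styleAdjust?: Record<string, object>; animations?: Record<string, object> };
      modelFiles: { name: string; path: string }[];
    }

    class RoadSceneComponent {
      // The component encapsulates the shared simulation logic; only the configuration differs per scene.
      constructor(public readonly config: SceneConfig) {}
    }

    // A developer only supplies the configuration of the road scene to be simulated.
    function createRoadSceneComponent(config: SceneConfig): RoadSceneComponent {
      return new RoadSceneComponent(config);
    }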
S102, initializing each basic parameter corresponding to the road scene to be simulated in the target component according to the business logic threshold data, the intersection data and the model auxiliary data.
In this implementation, the generated target component may be initialized based on the relevant configuration information. Initialization can be understood as replacing the default parameters in the target component with the parameters from the input configuration information, so that the target component can run the service logic related to the road scene to be simulated according to that configuration information.
Optionally, each basic parameter may be initialized in the target component according to the business logic threshold data, the intersection data and the model auxiliary data, where the basic parameters may include: intersection data, business logic thresholds, and model assistance data. For example: after initializing the service logic threshold data, the target component can execute the service trigger logic according to the configured service logic threshold data, and when the service logic trigger condition meets the service logic threshold, the target component triggers to execute the corresponding service logic.
S103, dynamically loading the rendering engine in the target component, and initializing the rendering engine according to the interface data and the scene rendering data.
In some embodiments, simulation-related business logic may be executed by the target component, while the execution results of the target component may be visually presented by the rendering engine.
The rendering engine used in this embodiment may be an Easy3D engine, an open-source library for three-dimensional modeling, geometric processing and rendering. The rendering engine is kept independent of the target component, which has two benefits: first, the component stays small and its code lightweight, with the business logic separated from the rendering logic; second, the component and the engine can be updated independently without affecting each other.
Optionally, the rendering engine may be initialized based on the interface data and the scene rendering data. The target component first loads the rendering engine asynchronously; after loading is completed, the engine is initialized and the interface data and scene rendering data are passed to it. During initialization the engine parses the Token, accesses the remote service and verifies whether it is legitimate, and the Easy3D instance is returned once initialization is completed.
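A minimal sketch of this load-then-initialize flow might look as follows; the 'easy3d' module name and the init call are assumptions, not the engine's actual API:

    // Sketch of the asynchronous load-then-initialize flow; the 'easy3d' module name and init API are assumptions.
    async function initRenderEngine(
      config: { interfaceData: object; sceneRendering: object },
      token: string,
    ): Promise<unknown> {
      // The engine is loaded asynchronously so that business logic and rendering logic stay separate.
      const easy3d: any = await import('easy3d');
      const engine = await easy3d.init({
        token,                              // parsed by the engine to verify access to the remote service
        interfaces: config.interfaceData,   // service addresses used by the engine
        rendering: config.sceneRendering,   // initial camera position/pose and scene lights
      });
      return engine;                        // the Easy3D instance returned after initialization completes
    }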
And S104, in the target component, according to the model file data, the model auxiliary data and the configured target scene type, sequentially loading and analyzing to obtain model data of each model in the road scene to be simulated.
In this embodiment, the model is initialized in the target component according to the model file data and the model auxiliary data, where the model may be designed and created by a designer in advance according to the road scene to be simulated and exists under the target path.
In the model initialization process, the target component can determine the loading and sorting conditions of the models according to the configured target scene types, so that model data of each model in the road scene to be simulated are sequentially loaded and analyzed according to the sorting results.
S105, acquiring simulation data in the target assembly according to the identified camera operation instruction, and generating a simulation picture of the road scene to be simulated according to the simulation data and model data of each model.
Because the camera position differs, the type of road scene observed by the user and the states of the models in the scene also differ. For example, when the camera is high above the ground, the scene is in a full-overview mode, in which all road information can be displayed but specific vehicle and pedestrian information in the road cannot. When the camera is close to the ground, the scene is in a high-definition intersection mode, in which a high-definition intersection can be displayed in detail, including traffic-flow information, license plates, signal lights and the like.
Optionally, corresponding simulation data can be obtained in the target component according to the identified camera operation instruction, so that simulation of model animation is performed according to the simulation data and model data of each model, and a simulation picture of a road scene to be simulated is generated.
In summary, according to the traffic road scene simulation method provided by this embodiment, a target component corresponding to the road scene to be simulated can be generated based on the acquired relevant configuration information of that scene, and the operation parameters are initialized in the target component, so that simulation of the road scene to be simulated can be executed based on the target component. By componentizing the logic code related to the service scene, a developer only needs to provide the configuration information relevant to the road scene to be simulated in order to quickly generate the corresponding target component, which reduces the developer's learning cost while improving simulation efficiency. In addition, the code component can meet the logic requirements of different services, reducing the cost of secondary development.
The present embodiment describes each of the above related configuration information:
the interface data may include: data subscription interface data, service communication connection interface data, and system login interface data.
The interfaces mainly include: a login interface, a subscribe interface, an unsubscribe interface, and a websocket service interface. The parameters and return values of each interface are agreed in advance, and the component can simulate different data according to how the interfaces are called and the data returned over the websocket.
The relevant code of the interface data may be exemplified as follows:
the business logic threshold data includes: a distance threshold value between the camera and the intersection and a height threshold value between the camera and the ground; the height threshold value of the camera and the ground is used for judging the current scene type, and the distance threshold value of the camera and the intersection is used for initiating and subscribing real-time traffic data (simulation data) corresponding to the scene type.
The intersection data includes: intersection longitude and latitude data.
This configuration information specifies, according to the service scene, the intersection data to be displayed in the scene; it allows the component to configure or intelligently subscribe to the data and to trigger the related events, so that the complex simulation control logic is encapsulated inside the component.
Exemplary codes for intersection data are shown below:
It should be noted that the intersection data may be configured following a data normalization process, that is, according to a specified data format.
For example:

    300056: {
        position: [-390.084, 10.918, 205.2122],
        target:   [-401.708, 0, 191.691],
        center:   [-399.494, 0, 197.163]
    }

Here 300056 is the number of the intersection, position is the camera position when looking at the intersection, target is the point the camera looks at, and center is the position of the intersection's center point.
The scene rendering data includes: camera position data and pose data, and scene lighting data.
The camera position and pose data may refer to the camera's initial position and initial pose. The scene light data refers to the lights set according to the display color requirements of the scene; in addition, the scene rendering data can also include the scene background color, scene rendering auxiliary tool parameters, and the like.
Exemplary codes are shown below:
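A hedged sketch of such scene rendering data, with assumed names and values, might be:

    // Illustrative sketch of the scene rendering data; names and values are assumptions.
    const sceneRendering = {
      camera: {
        position: [-390, 120, 205],   // initial camera position
        target:   [-400, 0, 192],     // initial camera pose expressed as a look-at point
      },
      lights: [
        { type: 'ambient',     color: '#ffffff', intensity: 0.6 },
        { type: 'directional', color: '#ffe9c0', intensity: 1.0, direction: [-1, -2, -1] },
      ],
      background: '#0b1d2a',    // optional scene background color
      antialias: true,          // smoother rendered road routes
      colorCorrection: true,    // finer rendered colors
    };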
Antialiasing is turned on by default so that the rendered road routes appear smoother, and color correction is applied so that the rendered colors appear finer.
The model assistance data includes: model style adjustment data, model animation data.
The model auxiliary data mainly provides the models used for scene simulation and the relevant characteristics of each model. Based on the model auxiliary data and the model file data, the target component can load the model scene, load the models of the same scene in sequence, adjust materials after the models are initialized, and so on.
In some cases, the effect designed in the model design software is inconsistent with the effect actually obtained once loaded into the development environment. For better adaptation, the model style can be adjusted accordingly through the model style adjustment data in the model auxiliary data, i.e. by inputting style adjustment information.
Exemplary codes are as follows:
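A hedged sketch of the model auxiliary data, with assumed names, might be:

    // Illustrative sketch of the model auxiliary data; names are assumptions.
    const modelAssist = {
      // Display/loading order of the models within the same scene.
      loadOrder: ['ground', 'trunkRoad', 'building', 'sky', 'hdIntersection', 'roadDevice', 'vehicle'],
      styleAdjust: {},   // per-model material/style overrides (see the style adjustment sketch below)
      animations: {},    // per-model animation descriptions (see the animation sketch below)
    };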
Exemplary codes for the model style adjustment data are as follows:
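A hedged sketch, with assumed property names and values:

    // Illustrative style adjustment entries; property names and values are assumptions.
    const styleAdjust = {
      building:  { roughness: 0.8, metalness: 0.1, color: '#9aa4ad' },  // replaces the matching style fields of the original model data
      trunkRoad: { emissive: '#202020' },
    };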
The model animation data is mainly used when the target component needs to change the body colors of various vehicle types; exemplary codes are shown as follows:
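A hedged sketch of such animation data, with assumed property names:

    // Illustrative animation data; the property names are assumptions.
    const animations = {
      vehicle: {
        bodyMaterial: 'carBody',   // which material of the vehicle model is recolored
        recolorOnUpdate: true,     // change the body color when real-time data reports a different one
      },
      cloud: { type: 'drift', speed: 0.2 },   // background models may also carry animation data
      tree:  { type: 'sway',  speed: 0.5 },
    };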
meanwhile, because the colors collected by the related equipment of the intersection can only be coarse-grained, a user is required to define specific color values for different color systems according to the configuration of lights and the like in the scene of the user, such as
The model file data includes: storage path information of each model in the road scene to be simulated.
In some embodiments, before the relevant configuration information of the road scene to be simulated is acquired in step S101, the method may further include: creating each model in the road scene to be simulated according to the real scene information of that road scene, generating an identifier for each model according to a preset naming convention, and storing each model together with its identifier under a preset path.
In order to facilitate component recognition, when the model is created, the model can be created according to a preset specification.
It is assumed that the simulation scene may include models such as the ground, buildings, clouds, mountains, trunk roads, the sky, high-definition intersections, road equipment and vehicles. These models can be created and named according to a preset naming table, so that the component can accurately determine the corresponding model from its identifier.
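As a purely hypothetical illustration of such a naming convention (these identifiers are assumptions, not the application's actual table):

    // Purely hypothetical identifier convention; these identifiers are assumptions, not the application's actual table.
    const modelIds: Record<string, string> = {
      ground: 'GND', building: 'BLD', cloud: 'CLD', mountain: 'MNT', trunkRoad: 'TRK',
      sky: 'SKY', hdIntersection: 'HDX', roadDevice: 'DEV', vehicle: 'VEH',
    };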
Based on the created models, model optimization can be performed, for example by merging vertices, materials and the like, to obtain optimized models. The created models are stored under a preset path so that they can later be loaded and parsed from that path.
FIG. 2 is a schematic flow chart of another traffic road scene simulation method according to an embodiment of the present application; optionally, in step S104, according to the model file data, the model auxiliary data and the configured target scene type, the model data of each model in the road scene to be simulated is loaded and analyzed in sequence, which may include:
s201, loading and analyzing the original model data of each model from the storage path of each model in sequence according to the storage path information of each model and the display sequence of each model under the target scene type.
Optionally, in this embodiment, the simulated scene types may be divided into three categories: a full-overview scene, a highlighted-road scene and a high-definition intersection scene. Because the models to be displayed under different scene types differ, and their levels of display detail differ, the display order of the models also differs. For example, in navigation software, the level of detail that can be displayed changes as the page is continuously zoomed, and the bottommost details are displayed later: on the basis of displaying a vehicle model, the passenger model inside the vehicle is displayed only when further detail is shown. That is, the display order of the individual models differs.
Then, according to the display sequence of each model under the target scene type and the storage path information of each model, the original model data of each model is loaded and analyzed from the storage path of each model in turn.
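A minimal sketch of this ordered loading, assuming a generic asynchronous loader (loadModelFile is a hypothetical helper):

    // Sketch of loading models in display order; loadModelFile is a hypothetical loader.
    async function loadModels(
      modelFiles: { name: string; path: string }[],
      displayOrder: string[],
    ): Promise<Map<string, object>> {
      const rawModels = new Map<string, object>();
      // Sort the model files by their display order under the target scene type.
      const ordered = [...modelFiles].sort(
        (a, b) => displayOrder.indexOf(a.name) - displayOrder.indexOf(b.name),
      );
      for (const file of ordered) {
        // Load and parse the original model data from its storage path, one model at a time.
        rawModels.set(file.name, await loadModelFile(file.path));
      }
      return rawModels;
    }

    // Hypothetical helper: fetch a model file from its storage path and parse it.
    async function loadModelFile(path: string): Promise<object> {
      const response = await fetch(path);
      return response.json();   // a JSON-described model is assumed here purely for illustration
    }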
S202, according to the model auxiliary data, model style adjustment data and model animation data of each model are obtained.
Optionally, the model style adjustment data and the model animation data of each model may be obtained from the model auxiliary data. Not every model has corresponding model style adjustment data and model animation data; when a model does have such auxiliary data, its model data may be updated based on it.
And S203, updating the original model data of each model according to the model style adjustment data and the model animation data of each model to obtain the model data of each model.
Optionally, the original model data of each model may be updated based on the model style adjustment data and the model animation data of each model, so as to obtain model data of each model.
Corresponding model data in the original model data of the model can be modified by the model style adjustment data, and corresponding animation data can be added to the original model data by the model animation data.
Optionally, in step S203, updating the original model data of each model according to the model style adjustment data and the model animation data of each model to obtain model data of each model may include: and if the model has model style adjustment data, replacing corresponding model style data in the original model data of the model by adopting the model style adjustment data.
In some embodiments, for any model, if model style adjustment data for that model can be obtained, i.e., model style adjustment data for that model exists, the original model data can be modified with the obtained model style adjustment data.
For example: for a vehicle model, assuming that there is model style adjustment data that the vehicle body color is red and the original vehicle body color of the vehicle model is black, the vehicle body color of the vehicle model may be modified to red.
If the model has model animation data, the model animation data is added to the original model data of the model.
If the model has animation data, whether the animation data is played or not can be further determined according to the configuration information, and the animation data is added into the original model data.
The model animation data here is typically set for some background model in the scene, for example: the animation data of the cloud in the scene can be a drifting mode of the cloud; or the tree in the scene, the animation data thereof may be a rocking mode of the tree, etc.
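A minimal sketch of this update step, assuming simple object shapes for the model data (the shapes and names are assumptions):

    // Sketch of updating original model data with style adjustments and animation data; the data shapes are assumptions.
    interface ModelData {
      style: Record<string, unknown>;
      animation?: Record<string, unknown>;
      [key: string]: unknown;
    }

    function updateModelData(
      original: ModelData,
      styleAdjust?: Record<string, unknown>,
      animation?: Record<string, unknown>,
    ): ModelData {
      const updated: ModelData = { ...original };
      if (styleAdjust) {
        // Replace the corresponding style fields in the original model data.
        updated.style = { ...original.style, ...styleAdjust };
      }
      if (animation) {
        // Add the animation data (e.g. cloud drift, tree sway) to the original model data.
        updated.animation = animation;
      }
      return updated;
    }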
FIG. 3 is a schematic flow chart of another traffic road scene simulation method according to an embodiment of the present application; optionally, in step S105, acquiring simulation data according to the identified camera operation instruction may include:
s301, responding to camera moving operation input through an external trigger device, and acquiring current position information of a camera bound with the external trigger device.
Optionally, a binding between the external trigger device and the camera may be established in advance, and the target component may bind various operation events to the camera, where the camera may be understood as the user's viewpoint, that is, the viewing angle from which the user watches the simulated picture.
The external trigger device may refer to a mouse, and the mouse is bound to the camera, so that when the mouse moves, the position of the center point of the camera changes, thereby changing the current position of the camera.
After the binding is finished, the target component can identify the input operation of the external trigger device and acquire the current position information of the camera bound by the external trigger device.
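A minimal sketch of such a binding, assuming a browser mouse event and a hypothetical camera interface:

    // Sketch of binding mouse movement to the camera; the camera interface is an assumption.
    interface SimCamera {
      position: { x: number; y: number; z: number };
      moveCenterBy(dx: number, dy: number): void;   // hypothetical: shifts the camera's center point
    }

    function bindCameraToMouse(
      camera: SimCamera,
      onMoved: (position: SimCamera['position']) => void,
    ): void {
      window.addEventListener('mousemove', (event) => {
        // Moving the mouse changes the camera's center point and therefore its current position.
        camera.moveCenterBy(event.movementX, event.movementY);
        onMoved(camera.position);   // report the camera's current position to the component
      });
    }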
S302, generating a simulation data acquisition instruction according to the current position information of the camera and the business logic threshold value data.
Optionally, according to the identified current position information of the camera and the service logic threshold value data in the related configuration information, a simulation data acquisition instruction can be generated.
Because the types of the simulated scenes and the state display of each model in the scenes are different when the cameras are at different positions, the simulation data to be acquired are different, and then the simulation data acquisition instruction required under the service logic to be triggered can be determined according to the current position information of the identified cameras and the service logic threshold value data.
S303, acquiring real-time traffic data of the camera at the current position according to the simulation data acquisition instruction, and taking the real-time traffic data as simulation data.
In some embodiments, the target component may establish a communication connection with an external server based on interface service address data defined in the interface data, so that the simulation data acquisition instruction may be sent to the server, and the server may feed back the collected real-time traffic data to the target component according to the simulation data acquisition instruction.
FIG. 4 is a schematic flow chart of another traffic road scene simulation method according to an embodiment of the present application; optionally, in step S302, generating the simulation data acquisition instruction according to the current location information of the camera and the service logic threshold data may include:
S401, determining the current scene type according to the current position information of the camera and the height threshold value of the camera and the ground.
Optionally, the height threshold between the camera and the ground is not limited to a single fixed value and may be a threshold interval. When the height of the camera's current position above the ground is greater than or equal to the upper limit of the height threshold, the camera can be considered far from the ground, and the corresponding scene type can be the full-overview mode; when the height is between the lower and upper limits of the height threshold, the camera is at a moderate distance from the ground, and the corresponding scene type can be the highlighted-road mode; when the height is less than or equal to the lower limit of the height threshold, the camera is considered close to the ground, and the corresponding scene type can be the high-definition intersection mode.
In the full-overview mode, all roads within the camera's view range can be displayed; in the highlighted-road mode, the roads within the camera's view range can be highlighted; and when the camera position moves further, the scene can switch to the high-definition intersection mode so that a specific highlighted intersection is displayed in high definition. In the high-definition intersection mode, the traffic conditions of the intersection can be displayed in detail, including but not limited to: traffic lights, road signs, vehicle license plates, dynamic pictures of vehicles in motion, and the like.
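A minimal sketch of this scene-type decision, assuming the height threshold is given as a [lower, upper] interval (the mode names are assumptions):

    // Sketch of deriving the current scene type from the camera height threshold interval; the names are assumptions.
    type SceneType = 'fullOverview' | 'highlightedRoad' | 'hdIntersection';

    function currentSceneType(cameraHeight: number, heightThreshold: [number, number]): SceneType {
      const [lower, upper] = heightThreshold;
      if (cameraHeight >= upper) return 'fullOverview';     // far from the ground: display all roads
      if (cameraHeight <= lower) return 'hdIntersection';   // close to the ground: display intersection detail
      return 'highlightedRoad';                             // in between: highlight the roads in view
    }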
S402, determining state display parameters of each model under the scene type and the current position of the camera according to the current position information of the camera and the distance threshold value between the camera and the intersection.
And according to the current position information of the camera and the distance threshold value between the camera and the intersection, an acquisition instruction of the intersection simulation data can be initiated. Similarly, the distance threshold between the camera and the intersection can be a threshold interval, and when the current position of the camera is far from or near to the intersection, the model state display parameters to be displayed on the intersection can be different.
For example, when the camera is close to the intersection, road signs, intersection traffic flow, people flow, vehicle license plates and the like on the road need to be displayed.
S403, generating a simulation data acquisition instruction according to the state display parameters, wherein the simulation data acquisition instruction is used for acquiring real-time traffic data of each model under the corresponding state display parameters.
Then, based on the state display parameters of each model under the determined scene type, a simulation data acquisition instruction can be generated, so that based on the simulation data acquisition instruction, the server can be triggered to acquire the real traffic data of each model under the scene type as the simulation data to be acquired.
Optionally, in step S105, generating a simulation picture of the road scene to be simulated according to the simulation data and the model data of each model may include: and carrying out model animation simulation according to the model data of each model and the simulation data of each model under the corresponding state display parameters, and generating a simulated traffic picture of the road scene to be simulated under the scene type and the camera position.
Alternatively, based on the acquired simulation data of each model, generation of model animation may be triggered, thereby generating a simulated traffic picture.
For example: if the simulation data of vehicle model a indicates that vehicle a is located at position xx, is traveling east, and has license plate xxx, then the animation of vehicle model a can be generated from these simulation data. In the generated simulated traffic picture, vehicle a can be seen traveling in the given direction at the given position, which corresponds to the actual running data of vehicle a at the intersection.
In some embodiments, when the vehicle models are created, one model may be created for each vehicle type; however, during the actual simulation the vehicles running in real time at an intersection may be numerous. The created vehicle models can therefore be copied according to the acquired simulation data to obtain more vehicle models, the copied models can be color-adjusted, and the adjusted vehicle models can be applied to the animation simulation.
In some embodiments, the server may further mark a boundary position for each intersection. When a vehicle's position exceeds the marked position, the vehicle's information is no longer collected; at this time, notification information may be sent to the target component, so that the target component deletes the vehicle from the simulation picture and no longer displays it. On the one hand, this operation improves the realism of the simulated picture (in an actual scene, a vehicle that leaves the intersection should no longer appear in the picture); on the other hand, it releases some idle resources and thus improves processing efficiency.
FIG. 5 is a schematic flow chart of another traffic road scene simulation method according to an embodiment of the present application; optionally, in the step, according to the model data of each model and the simulation data of each model under the corresponding state display parameters, performing model animation simulation to generate a simulated traffic picture of the road scene to be simulated under the scene type and the camera position, the method may include:
s501, adding model data of each model into a rendering queue.
In some embodiments, after the model data of each model is obtained, the model data of each model may be added to the rendering queue first, where the model data may be sequentially added to the rendering queue according to the loading order of each model.
S502, sequentially reading model data of each model from a rendering queue, and performing model animation rendering according to simulation data of each model under corresponding state display parameters so as to display a simulated traffic picture.
Optionally, the target component may sequentially read model data of each model from the rendering queue, perform model animation rendering according to simulation data of each model pushed by the server under the corresponding state display parameters, generate a simulated traffic picture of the traffic scene to be simulated under the corresponding scene type and camera position, and display the simulated traffic picture on the display device.
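A minimal sketch of this queue-based rendering step, with renderModelAnimation standing in for the engine's actual render call (an assumption):

    // Sketch of queue-based model animation rendering; renderModelAnimation stands in for the engine's render call.
    function renderSimulatedPicture(
      renderQueue: { name: string; data: object }[],
      simulationData: Map<string, object>,
      renderModelAnimation: (model: object, simData: object | undefined) => void,
    ): void {
      // Model data was added to the queue in loading order; read it back in the same order.
      while (renderQueue.length > 0) {
        const model = renderQueue.shift()!;
        // Render the model's animation using its simulation data under the current state display parameters.
        renderModelAnimation(model.data, simulationData.get(model.name));
      }
    }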
According to this scheme, the more complex logic is reused and some complex techniques are encapsulated in the component, which reduces the cost of secondary development and improves development efficiency; through continuous iteration, the quality of the whole digital road project is improved.
In summary, according to the traffic road scene simulation method provided by the embodiment of the application, a target component corresponding to the road scene to be simulated can be generated based on the acquired relevant configuration information of that scene, and the operation parameters are initialized in the target component, so that simulation of the road scene to be simulated can be executed based on the target component. By componentizing the logic code related to the service scene, a developer only needs to provide the configuration information relevant to the road scene to be simulated in order to quickly generate the corresponding target component, which reduces the developer's learning cost while improving simulation efficiency. In addition, the code component can meet the logic requirements of different services, reducing the cost of secondary development.
The following describes a device, equipment, a storage medium, etc. for executing the traffic road scene simulation method provided by the present application, and specific implementation processes and technical effects thereof are referred to above, and are not described in detail below.
Fig. 6 is a schematic diagram of a traffic road scene simulation device according to an embodiment of the present application, where functions implemented by the traffic road scene simulation device correspond to steps executed by the method. The apparatus may be understood as a terminal device or a server as described above, or a processor of a server, or may be understood as a component, which is independent from the server or the processor and performs the functions of the present application under the control of the server, as shown in fig. 6, where the apparatus may include: a generating module 610, an initializing module 620, and a loading module 630;
the generating module 610 is configured to obtain relevant configuration information of a road scene to be simulated, and generate a target component corresponding to the road scene to be simulated according to the relevant configuration information; the relevant configuration information includes: interface data, business logic threshold data, intersection data, scene rendering data, model assistance data, and model file data;
the initialization module 620 is configured to initialize each basic parameter corresponding to the road scene to be simulated in the target component according to the business logic threshold data, the intersection data and the model auxiliary data;
An initialization module 620, configured to dynamically load the rendering engine in the target component, and initialize the rendering engine according to the interface data and the scene rendering data;
the loading module 630 is configured to sequentially load and parse model data of each model in the road scene to be simulated according to the model file data, the model auxiliary data and the configured target scene type in the target component;
the generating module 610 is configured to obtain simulation data according to the identified camera operation instruction in the target component, and generate a simulation image of the road scene to be simulated according to the simulation data and model data of each model.
Optionally, the interface data includes: data subscription interface data, service communication connection interface data, system login interface data;
the business logic threshold data includes: a distance threshold value between the camera and the intersection and a height threshold value between the camera and the ground;
the intersection data includes: intersection longitude and latitude data;
the scene rendering data includes: camera position data and pose data, scene light data;
the model assistance data includes: model style adjustment data, model animation data;
the model file data includes: storage path information of each model in the road scene to be simulated.
Optionally, the loading module 630 is specifically configured to sequentially load and parse raw model data of each model from a storage path of each model according to storage path information of each model and a display sequence of each model under a target scene type;
according to the model auxiliary data, model style adjustment data and model animation data of each model are obtained;
and updating the original model data of each model according to the model style adjustment data and the model animation data of each model to obtain the model data of each model.
Optionally, the loading module 630 is specifically configured to replace, if the model has model style adjustment data, corresponding model style data in original model data of the model with the model style adjustment data;
if the model has model animation data, the model animation data is added to the original model data of the model.
Optionally, the generating module 610 is specifically configured to obtain current position information of the camera bound to the external triggering device in response to a camera movement operation input through the external triggering device;
generating a simulation data acquisition instruction according to the current position information of the camera and the business logic threshold data;
And acquiring real-time traffic data of the camera at the current position according to the simulation data acquisition instruction, and taking the real-time traffic data as simulation data.
Optionally, the generating module 610 is specifically configured to determine a current scene type according to current location information of the camera and a height threshold of the camera and the ground;
determining the scene type and the state display parameters of each model under the current position of the camera according to the current position information of the camera and the distance threshold value between the camera and the intersection;
and generating a simulation data acquisition instruction according to the state display parameters, wherein the simulation data acquisition instruction is used for acquiring real-time traffic data of each model under the corresponding state display parameters.
Optionally, the generating module 610 is specifically configured to perform model animation simulation according to model data of each model and simulation data of each model under corresponding state display parameters, so as to generate a simulated traffic picture of the road scene to be simulated under the scene type and the camera position.
Optionally, the generating module 610 is specifically configured to add model data of each model to the rendering queue;
and sequentially reading model data of each model from the rendering queue, and performing model animation rendering according to the simulation data of each model under the corresponding state display parameters so as to display a simulated traffic picture.
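As a rough sketch of the rendering-queue step, the code below pushes model data onto a queue, reads it back in order and hands each model together with its matching simulation data to an engine-specific render call; names and data shapes are assumptions made for readability.

```typescript
// Rough sketch of the rendering-queue step: model data is pushed onto a queue,
// read back in order, and each model is animated with the simulation data that
// matches its state display parameters. renderModel() is a placeholder for the
// engine-specific draw/animate call; all names and shapes are assumptions.
interface QueuedModel {
  modelId: string;
  modelData: unknown; // parsed model data produced by the loading step
}

function renderSimulatedFrame(
  models: QueuedModel[],
  simulationData: Map<string, unknown>,                          // keyed by model id
  renderModel: (modelData: unknown, frameData: unknown) => void, // engine-specific call
): void {
  const queue: QueuedModel[] = [];
  for (const m of models) queue.push(m); // add model data of each model to the rendering queue

  while (queue.length > 0) {
    const next = queue.shift()!; // read model data back in order
    renderModel(next.modelData, simulationData.get(next.modelId)); // model animation rendering
  }
}
```

Processing the queue strictly in insertion order preserves the display sequence established during loading.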
The foregoing apparatus is used for executing the method provided in the foregoing embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), one or more digital signal processors (Digital Signal Processor, abbreviated as DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), or the like. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor that can invoke the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-chip (SoC).
The modules may be connected to or communicate with each other via wired or wireless connections. The wired connection may include a metal cable, an optical cable, a hybrid cable, or the like, or any combination thereof. The wireless connection may include a connection through a LAN, WAN, Bluetooth, ZigBee, or NFC, or any combination thereof. Two or more modules may be combined into a single module, and any one module may be divided into two or more units. It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedures of the system and apparatus described above may refer to the corresponding procedures in the method embodiments, which are not repeated in the present disclosure.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the device may be a computing device with a data processing function.
The device comprises: a processor 801, and a storage medium 802.
The storage medium 802 is used to store a program, and the processor 801 calls the program stored in the storage medium 802 to execute the above-described method embodiment. The specific implementation manner and the technical effect are similar, and are not repeated here.
In which the storage medium 802 stores program code that, when executed by the processor 801, causes the processor 801 to perform various steps in the traffic road scene simulation method according to various exemplary embodiments of the application described in the section of the description of the exemplary method described above.
The processor 801 may be a general purpose processor such as a Central Processing Unit (CPU), digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution.
The storage medium 802 is a non-volatile computer-readable storage medium that can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The storage medium may include at least one type of memory, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a random access memory (Random Access Memory, RAM), a static random access memory (Static Random Access Memory, SRAM), a programmable read-only memory (Programmable Read-Only Memory, PROM), a read-only memory (Read-Only Memory, ROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a magnetic memory, a magnetic disk, an optical disk, and the like. A storage medium may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The storage medium 802 of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
Optionally, the present application also provides a program product, such as a computer readable storage medium, comprising a program for performing the above-described method embodiments when being executed by a processor.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units may be stored in a computer-readable storage medium. The software functional units are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods according to the embodiments of the application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or the like.

Claims (10)

1. A traffic road scene simulation method, characterized by comprising:
acquiring relevant configuration information of a road scene to be simulated, and generating a target component corresponding to the road scene to be simulated according to the relevant configuration information; the relevant configuration information includes: interface data, business logic threshold data, intersection data, scene rendering data, model auxiliary data, and model file data;
initializing each basic parameter corresponding to the road scene to be simulated in the target component according to the business logic threshold data, the intersection data and the model auxiliary data;
dynamically loading a rendering engine in the target component, and initializing the rendering engine according to the interface data and the scene rendering data;
in the target component, sequentially loading and parsing model data of each model in the road scene to be simulated according to the model file data, the model auxiliary data and the configured target scene type;
and in the target component, acquiring simulation data according to the identified camera operation instruction, and generating a simulated picture of the road scene to be simulated according to the simulation data and the model data of each model.
2. The method of claim 1, wherein the interface data comprises: data subscription interface data, service communication connection interface data, system login interface data;
the business logic threshold data comprises: a distance threshold value between the camera and the intersection and a height threshold value between the camera and the ground;
the intersection data includes: intersection longitude and latitude data;
the scene rendering data includes: camera position data and pose data, scene light data;
the model auxiliary data includes: model style adjustment data, model animation data;
the model file data includes: storage path information of each model in the road scene to be simulated.
3. The method according to claim 2, wherein sequentially loading and parsing model data of each model in the road scene to be simulated according to the model file data, the model auxiliary data and the configured target scene type includes:
according to the storage path information of each model and the display sequence of each model under the target scene type, sequentially loading and parsing the original model data of each model from the storage path of each model;
according to the model auxiliary data, model style adjustment data and model animation data of each model are obtained;
and updating the original model data of each model according to the model style adjustment data and the model animation data of each model to obtain the model data of each model.
4. A method according to claim 3, wherein updating the original model data of each model according to the model style adjustment data and the model animation data of each model to obtain the model data of each model comprises:
if the model has model style adjustment data, replacing corresponding model style data in original model data of the model by adopting the model style adjustment data;
if the model has model animation data, adding the model animation data into the original model data of the model.
5. The method of claim 2, wherein the obtaining simulation data based on the identified camera operation instructions comprises:
responding to camera moving operation input through external trigger equipment, and acquiring current position information of a camera bound with the external trigger equipment;
generating a simulation data acquisition instruction according to the current position information of the camera and the business logic threshold data;
and acquiring real-time traffic data of the camera at the current position according to the simulation data acquisition instruction, and taking the real-time traffic data as the simulation data.
6. The method of claim 5, wherein generating the simulated data acquisition instructions from the current location information of the camera and the business logic threshold data comprises:
determining a current scene type according to the current position information of the camera and a height threshold value of the camera and the ground;
determining state display parameters of each model under the scene type and the current position of the camera according to the current position information of the camera and the distance threshold value between the camera and the intersection;
and generating the simulation data acquisition instruction according to the state display parameters, wherein the simulation data acquisition instruction is used for acquiring real-time traffic data of each model under the corresponding state display parameters.
7. The method of claim 6, wherein generating a simulated picture of the road scene to be simulated according to the simulation data and model data of each model comprises:
and carrying out model animation simulation according to model data of each model and simulation data of each model under corresponding state display parameters, and generating a simulated traffic picture of the road scene to be simulated under the scene type and the camera position.
8. The method according to claim 7, wherein the performing model animation simulation according to the model data of each model and the simulation data of each model under the corresponding state display parameters to generate the simulated traffic picture of the road scene to be simulated under the scene type and the camera position comprises:
model data of each model are added into a rendering queue;
and sequentially reading model data of each model from the rendering queue, and performing model animation rendering according to simulation data of each model under corresponding state display parameters so as to display the simulated traffic picture.
9. An electronic device, comprising: a processor, a storage medium, and a bus, the storage medium storing program instructions executable by the processor, the processor and the storage medium communicating over the bus when the electronic device is running, the processor executing the program instructions to implement the traffic road scene simulation method according to any one of claims 1 to 8.
10. A computer readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, is adapted to carry out the traffic road scene simulation method according to any of claims 1 to 8.
CN202311014289.8A 2023-08-11 2023-08-11 Traffic road scene simulation method, electronic equipment and storage medium Pending CN117197296A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311014289.8A CN117197296A (en) 2023-08-11 2023-08-11 Traffic road scene simulation method, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311014289.8A CN117197296A (en) 2023-08-11 2023-08-11 Traffic road scene simulation method, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117197296A true CN117197296A (en) 2023-12-08

Family

ID=88986020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311014289.8A Pending CN117197296A (en) 2023-08-11 2023-08-11 Traffic road scene simulation method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117197296A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117724647A (en) * 2024-02-07 2024-03-19 杭州海康威视数字技术股份有限公司 Information configuration display method and device, electronic equipment and machine-readable storage medium
CN117724647B (en) * 2024-02-07 2024-06-04 杭州海康威视数字技术股份有限公司 Information configuration display method and device, electronic equipment and machine-readable storage medium

Similar Documents

Publication Publication Date Title
US20190377981A1 (en) System and Method for Generating Simulated Scenes from Open Map Data for Machine Learning
CN106951583B (en) Method for virtually arranging monitoring cameras on construction site based on BIM technology
US10096158B2 (en) Method and system for virtual sensor data generation with depth ground truth annotation
CN110494895A (en) Use the Rendering operations of sparse volume data
CN110807219B (en) Three-dimensional simulation modeling method, device, terminal and storage medium for road network
Talwar et al. Evaluating validity of synthetic data in perception tasks for autonomous vehicles
CN117197296A (en) Traffic road scene simulation method, electronic equipment and storage medium
CN113204897B (en) Scene modeling method, device, medium and equipment for parallel mine simulation system
CN113763231A (en) Model generation method, image perspective determination device, image perspective determination equipment and medium
CN114462233A (en) Microscopic traffic simulation method, computer device and storage medium
CN115908716A (en) Virtual scene light rendering method and device, storage medium and electronic equipment
CN114082191A (en) Game engine visual interaction method based on BIM design
CN116206068B (en) Three-dimensional driving scene generation and construction method and device based on real data set
CN115937352A (en) Mine scene simulation method, mine scene simulation system, electronic equipment and storage medium
CN117237511A (en) Cloud image processing method, cloud image processing device, computer and readable storage medium
CN112712098A (en) Image data processing method and device
CN116962612A (en) Video processing method, device, equipment and storage medium applied to simulation system
CN112906241B (en) Mining area automatic driving simulation model construction method, mining area automatic driving simulation model construction device, mining area automatic driving simulation model construction medium and electronic equipment
Galazka et al. CiThruS2: Open-source photorealistic 3D framework for driving and traffic simulation in real time
US20230196619A1 (en) Validation of virtual camera models
CN115857685A (en) Perception algorithm data closed-loop method and related device
CN114185320B (en) Evaluation method, device and system for unmanned system cluster and storage medium
KR102410870B1 (en) A drone simulator system with realistic images
CN114307158A (en) Three-dimensional virtual scene data generation method and device, storage medium and terminal
Koduri et al. AUREATE: An Augmented Reality Test Environment for Realistic Simulations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination