CN115657494A - Virtual object simulation method, device, equipment and storage medium - Google Patents


Info

Publication number: CN115657494A
Authority: CN (China)
Prior art keywords: information, determining, acquisition information, virtual, virtual environment
Legal status: Pending
Application number: CN202211107262.9A
Other languages: Chinese (zh)
Inventors: 王希同, 胡旷, 苏菲菲
Current Assignee: Apollo Intelligent Technology Beijing Co Ltd
Original Assignee: Apollo Intelligent Technology Beijing Co Ltd
Application filed by Apollo Intelligent Technology Beijing Co Ltd
Priority: CN202211107262.9A
Publication: CN115657494A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a virtual object simulation method, apparatus, device, and storage medium, relating to the field of artificial intelligence and, in particular, to autonomous driving. The specific implementation scheme is as follows: determining a virtual environment scene; determining acquisition information for a target object, the acquisition information being collected by a sensor from the target object in a predetermined real scene; determining perception information for the target object according to the virtual environment scene, the acquisition information, and position information of a reference object; and generating, in the virtual environment scene, a virtual object corresponding to the target object according to the perception information. The position information of the reference object includes a first position of the reference object in a virtual environment scene coordinate system and a second position of the reference object in a sensor coordinate system.

Description

Virtual object simulation method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technology, in particular to the field of autonomous driving, and more particularly to a virtual object simulation method, a virtual object simulation apparatus, an electronic device, a storage medium, a computer program product, and a virtual object simulation device.
Background
Perception is an important component of unmanned driving. Through perception, an unmanned vehicle can determine whether obstacles exist in the surrounding environment, as well as information such as obstacle type and distance, and make different decisions based on this information.
Disclosure of Invention
The present disclosure provides a virtual object simulation method, a virtual object simulation apparatus, an electronic device, a storage medium, a computer program product, and a virtual object simulation device.
According to an aspect of the present disclosure, there is provided a virtual object simulation method, including: determining a virtual environment scene; determining acquisition information for a target object, wherein the acquisition information is collected by a sensor from the target object in a predetermined real scene; determining perception information for the target object according to the virtual environment scene, the acquisition information, and position information of a reference object; and generating, in the virtual environment scene, a virtual object corresponding to the target object according to the perception information; wherein the position information of the reference object includes a first position of the reference object in a virtual environment scene coordinate system and a second position of the reference object in a sensor coordinate system.
According to another aspect of the present disclosure, a virtual object simulation apparatus is provided, which includes a first determining module, a second determining module, a third determining module, and a first generating module. The first determining module is configured to determine a virtual environment scene. The second determining module is configured to determine acquisition information for a target object, the acquisition information being collected by a sensor from the target object in a predetermined real scene. The third determining module is configured to determine perception information for the target object according to the virtual environment scene, the acquisition information, and position information of a reference object. The first generating module is configured to generate, in the virtual environment scene, a virtual object corresponding to the target object according to the perception information. The position information of the reference object includes a first position of the reference object in a virtual environment scene coordinate system and a second position of the reference object in a sensor coordinate system.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods provided by the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method provided by the present disclosure.
According to another aspect of the disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method provided by the disclosure.
According to another aspect of the present disclosure, a virtual object simulation device is provided, which includes the electronic device provided in the present disclosure and a sensing mechanism, where the sensing mechanism includes a rack and at least one sensor mounted on the rack, and each sensor is used to detect a target object located around the rack so as to obtain acquisition information.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of an application scenario of a virtual object simulation method and apparatus according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart diagram of a virtual object simulation method in accordance with an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a virtual object simulation method according to an embodiment of the present disclosure;
FIG. 4 is a block schematic diagram of a virtual object simulation device according to an embodiment of the present disclosure;
FIG. 5 is a block schematic diagram of a virtual object simulation apparatus according to an embodiment of the present disclosure; and
FIG. 6 is a block diagram of an electronic device for implementing a virtual object simulation method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Developing a perception algorithm requires extensive data acquisition, scenario testing, simulation, and other work across many different scenes.
In some technical solutions, a virtual simulation scene can be built with a game engine, and the sensor data and scene data are obtained by construction inside the game engine. It can be understood that, with this solution, the data obtained from the game engine differs from real data, which leads to poor verification results.
The embodiments of the present disclosure provide a perception hardware-in-the-loop simulation method. Compared with game-engine simulation, the acquisition information here is collected by a sensor from real objects in a predetermined real scene, so the acquisition information is more realistic and a more realistic simulation effect is obtained.
The technical solutions provided in the present disclosure will be described in detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic view of an application scenario of a virtual object simulation method and apparatus according to an embodiment of the present disclosure.
It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include a sensor 101, an electronic device 102, and a display device 103.
The sensor 101 may include at least one of a laser radar, a millimeter wave radar, an ultrasonic radar, an image capture device, and an inertial navigation device. During the simulation, the sensors 101 may be mounted on a rack, which may be placed in a predetermined real scene, for example, indoors.
The electronic device 102 is electrically connected to the sensor 101, so as to acquire the acquisition information of the sensor 101, for example, acquire image data, point cloud data, and the like acquired by the sensor 101.
The display device 103 may display a human-computer interaction interface, thereby facilitating user operation. The human-computer interaction interface can display a virtual environment scene, and the virtual environment scene can correspond to map data. In addition, a virtual reference object corresponding to the reference object, for example, the rack or the sensor 101, may be generated in the virtual environment scene, and a virtual object corresponding to a target object, for example, an obstacle such as a guardrail, a pedestrian, or a vehicle, may be generated in the virtual environment scene.
During the simulation, the electronic device 102 may determine a first position of the reference object in the virtual environment scene according to the virtual environment scene, the map data, and the virtual reference object. The electronic device 102 may receive acquisition information such as image data and point cloud data collected by the sensor 101 and process it, for example with a perception algorithm, to obtain the position of the target object in the coordinate system of the sensor 101; a second position of the reference object in the coordinate system of the sensor 101 may also be obtained. The electronic device 102 may process the acquisition information, the position information of the reference object, and the virtual environment scene using the perception algorithm to obtain perception information, and generate a virtual object based on the perception information.
It should be noted that the virtual object simulation method provided by the embodiments of the present disclosure may be generally executed by the electronic device 102. Accordingly, the virtual object simulation apparatus provided by the embodiments of the present disclosure may be generally disposed in the electronic device 102. The virtual object simulation method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster different from the electronic device 102 and capable of communicating with the sensor 101, the display device 103 and/or the electronic device 102. Accordingly, the virtual object simulation apparatus provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the electronic device 102 and capable of communicating with the sensor 101, the display apparatus 103 and/or the electronic device 102.
It should be understood that the number of sensors, electronics and display devices in fig. 1 is merely illustrative. There may be any number of sensors, electronics, and display devices, as desired for implementation.
FIG. 2 is a schematic flow chart diagram of a virtual object simulation method in accordance with an embodiment of the present disclosure.
As shown in FIG. 2, the virtual object simulation method 200 may include operations S210 through S240.
In operation S210, a virtual environment scene is determined.
For example, virtual environment scenes can be built in advance as required, such as an unprotected left turn, urban roads, or an intersection, and a virtual environment scene may include road information. The virtual environment scene to be used may be selected from the pre-built virtual environment scenes by a scene selector; for example, after receiving a selection instruction, the scene selector selects the virtual environment scene to be used according to a scene identifier included in the selection instruction.
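As an illustration only, the scene selector described above might look like the following minimal Python sketch; the class names, the scene registry, and the shape of the selection instruction are assumptions, since the disclosure does not specify an implementation.

    from dataclasses import dataclass, field

    @dataclass
    class VirtualEnvironmentScene:
        scene_id: str    # scene identifier carried in a selection instruction
        name: str        # e.g. "unprotected left turn"
        map_data: dict = field(default_factory=dict)  # map data for this scene

    class SceneSelector:
        def __init__(self, scenes):
            # Index the pre-built scenes by their identifiers.
            self._scenes = {s.scene_id: s for s in scenes}

        def select(self, selection_instruction: dict) -> VirtualEnvironmentScene:
            # Pick the pre-built scene whose identifier matches the instruction.
            return self._scenes[selection_instruction["scene_id"]]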
In operation S220, acquisition information for a target object is determined, the acquisition information being collected by a sensor from the target object in a predetermined real scene.
For example, target objects may be arranged around the sensing mechanism; the target objects may be used to simulate obstacles such as pedestrians, vehicles, guardrails, and traffic cones. For example, if the sensing mechanism is installed in a laboratory where simulation tests are performed, dummies of various shapes can be placed in the laboratory and used as target objects.
For example, the sensing mechanism may be used to collect data on the target object, so as to obtain the acquisition information.
For example, the sensing mechanism may include a rack and at least one sensor. The rack may be placed in a predetermined real scene, such as a laboratory where simulation tests are conducted. The sensors are mounted on the rack and may include at least one of a laser radar, a millimeter-wave radar, an ultrasonic radar, an image acquisition device, and an inertial navigation device. The sensors are used to detect target objects around the rack so as to obtain acquisition information; for example, the information collected by the image acquisition device includes image data, and the information collected by the laser radar includes point cloud data. The sensors may also send the acquisition information to the electronic device.
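A minimal sketch of how such acquisition records might be represented on the electronic device is shown below; the field names and the time source are illustrative assumptions. The two timestamps anticipate the timeliness checks described later.

    import time
    from dataclasses import dataclass

    @dataclass
    class RawAcquisition:
        sensor: str              # e.g. "lidar" or "camera"
        payload: object          # point cloud, image frame, ...
        first_timestamp: float   # acquisition time stamped by the sensor
        second_timestamp: float = 0.0  # receive time, set on arrival

    def on_receive(record: RawAcquisition) -> RawAcquisition:
        # The electronic device stamps the record when it arrives; the sensor
        # and the device are assumed to share the same time reference.
        record.second_timestamp = time.time()
        return record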
In operation S230, perception information for the target object is determined according to the virtual environment scene, the acquisition information, and the position information of the reference object. The position information of the reference object includes a first position of the reference object in the virtual environment scene coordinate system and a second position of the reference object in the sensor coordinate system.
For example, the virtual environment scene may correspond to map data, and the map data may be used to determine position information in the map data corresponding to any position in the virtual environment scene.
For example, the sensing mechanism may be used as the reference object, or a sensor or the rack in the sensing mechanism may be used as the reference object. The first position of the reference object in the virtual environment scene coordinate system may be determined in conjunction with the map data. The relative positional relationship between the target object and the sensor, and the position of the target object in the sensor coordinate system, can be determined from the acquisition information of the sensor, whereby the second position of the reference object in the sensor coordinate system is obtained.
For example, the perception information may include the position, velocity, orientation, and the like of the target object, and the position may be represented by a bounding box. It should be understood that, since coordinates can be converted between the sensor coordinate system and the virtual environment scene coordinate system, each kind of perception information may include first information in the sensor coordinate system and second information in the virtual environment scene coordinate system. For example, the position of the target object may include a third position of the target object in the virtual environment scene coordinate system and a fourth position of the target object in the sensor coordinate system. In determining the position of the target object, the perception algorithm may first process the image data, point cloud data, and the like collected by the sensor to obtain the fourth position of the target object in the sensor coordinate system, and then convert the fourth position between the sensor coordinate system and the virtual environment scene coordinate system to obtain the third position of the target object in the virtual environment scene coordinate system.
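As a sketch of that conversion under simplifying assumptions (a planar pose, with the sensor located at the reference object so that the second position is the sensor-frame origin), the fourth position can be mapped to the third position with a rigid transform:

    import math

    def sensor_to_scene(fourth_position, first_position, yaw):
        # fourth_position: (x, y) of the target in the sensor coordinate system.
        # first_position: (x, y) of the reference object in the scene coordinate
        # system; yaw: heading of the sensor frame in the scene frame (radians).
        x, y = fourth_position
        c, s = math.cos(yaw), math.sin(yaw)
        # Rotate into the scene frame, then translate by the reference position.
        return (first_position[0] + c * x - s * y,
                first_position[1] + s * x + c * y)

    # Example: a target 10 m straight ahead of a reference at (100, 200)
    # facing along +x yields a third position of (110, 200).
    third_position = sensor_to_scene((10.0, 0.0), (100.0, 200.0), 0.0)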
In the actual simulation process, a perception algorithm can be used to process the virtual environment scene, the acquisition information, and the position information of the reference object to obtain the perception information. The perception algorithm is an algorithm used to realize automatic driving; it can process acquisition information to obtain information such as the type and position of a target object. The embodiments of the present disclosure do not limit the perception algorithm.
It should be understood that, for an autonomous vehicle, a sensor may be used to detect a GPS (Global Positioning System) coordinate of the vehicle; the GPS coordinate, together with the image data and point cloud data collected by the sensors, may be input to a perception algorithm, which outputs perception information. In the embodiments of the present disclosure, the first position of the reference object in the virtual environment scene coordinate system may be input to the perception algorithm in place of the GPS coordinate, along with the acquisition information, and the perception algorithm outputs the perception information.
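A sketch of that substitution follows; the perception algorithm itself is treated as a black box here, and the dictionary layout and the run_perception hook are hypothetical names, not part of the disclosure.

    def build_perception_input(first_position, acquisitions):
        # The reference object's first position takes the slot that a GPS
        # coordinate would occupy on a real vehicle.
        return {
            "ego_pose": first_position,
            "images": [a.payload for a in acquisitions if a.sensor == "camera"],
            "point_clouds": [a.payload for a in acquisitions if a.sensor == "lidar"],
        }

    # perception_info = run_perception(build_perception_input(first_pos, records))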
In operation S240, a virtual object corresponding to the target object is generated in the virtual environment scene according to the perception information.
For example, a virtual object may be generated at the third position in the virtual environment scene. For example, if the target object is located 10 meters directly in front of the reference object in the predetermined real scene, the virtual object may be generated 10 meters directly in front of the virtual reference object that characterizes the reference object (which may include the sensing mechanism) in the virtual environment scene. Furthermore, the state of the virtual object may be determined from the orientation, speed, and the like in the perception information; for example, if the target object in the real scene faces due north, the virtual object may face due north in the virtual environment scene.
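A minimal sketch of this placement step, under the assumption that the perception information arrives as a dictionary with scene-frame fields, might be:

    from dataclasses import dataclass

    @dataclass
    class VirtualObject:
        position: tuple    # third position, scene coordinates
        heading: float     # radians; e.g. math.pi / 2 for due north
        speed: float

    def spawn_virtual_object(scene_objects: list, perception: dict) -> VirtualObject:
        # Place the virtual object at the third position with the perceived state.
        obj = VirtualObject(position=perception["third_position"],
                            heading=perception["heading"],
                            speed=perception["speed"])
        scene_objects.append(obj)
        return obj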
According to the technical solution provided by the embodiments of the present disclosure, hardware-in-the-loop simulation of the sensor is realized through the virtual environment scene, the position information of the reference object, and the acquisition information of the sensor. Because the acquisition information is collected by a sensor from real objects in a predetermined real scene, it is more realistic, and a more realistic simulation effect is obtained.
In addition, after the virtual object is generated, the generated result can be displayed in the human-computer interaction interface, so that a user can intuitively determine whether the perception algorithm is accurate. For example, if an indoor target object moves toward the reference object but, in the virtual environment scene, the virtual object moves away from the virtual reference object corresponding to the reference object, or the number of target objects placed indoors differs from the number of generated virtual objects, it can be determined that the perception algorithm has low accuracy and needs to be debugged.
In some embodiments, data may be collected using sensors, followed by online verification of the perception algorithm in a simulation scenario. Data collected by the sensor, indoor positioning and the like can also be stored as data packets, and the perception algorithm can be verified offline through the stored data packets.
According to another embodiment of the present disclosure, the method may further include the following operations: in response to receiving a generation instruction, generating a virtual reference object in the virtual environment scene according to the generation instruction; and then determining the position of the virtual reference object in the virtual environment scene as the first position according to the map data corresponding to the virtual environment scene.
For example, a virtual environment scene may be presented using the human-computer interaction interface, and the user may select a predetermined position in the interface as desired. In response to receiving the selected predetermined position, a virtual reference object characterizing the reference object is generated in the virtual environment scene. Further, the predetermined position may include a start position and an end position, and the virtual reference object may move from the start position to the end position along a predetermined path in the virtual environment scene.
For example, the virtual environment scene has a correspondence with map data; for example, the start point of a lane line in the virtual environment scene corresponds to one piece of position information in the map data. Therefore, the position information of the virtual reference object, namely the first position, can be obtained from the map data through the position of the virtual reference object in the virtual environment scene and this correspondence. After the first position is determined, it may also be sent to the electronic device through a subscription-and-publication mechanism for messages.
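A sketch of this lookup-and-publish step follows; the scene_to_map mapping and the bus.publish interface are assumptions standing in for the correspondence relationship and the subscription-and-publication mechanism mentioned above.

    def determine_first_position(virtual_reference_position, scene_to_map):
        # scene_to_map encodes the correspondence between scene coordinates and
        # map data, e.g. built from lane-line anchor points.
        return scene_to_map(virtual_reference_position)

    def publish_first_position(bus, first_position):
        # Any message bus with publish/subscribe semantics would do here;
        # `bus.publish` is a hypothetical interface.
        bus.publish(topic="/reference/first_position", message=first_position)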
With the technical solution provided by the embodiments of the present disclosure, the virtual reference object can be placed according to actual requirements, and the first position of the virtual reference object affects the third position of the virtual object. The third position of the virtual object in the virtual environment scene can therefore be adjusted through the virtual reference object, so that the user can intuitively understand the influence of the position of the virtual reference object on the virtual object.
According to another embodiment of the present disclosure, the operation of determining the acquisition information for the target object may include the following operations: in response to receiving a start instruction, turning on a fault switch associated with the sensor; acquiring original acquisition information while the fault switch is on; and then determining the acquisition information according to the original acquisition information.
Illustratively, the user operates on the human-computer interaction interface; the operation can trigger a start instruction, and the electronic device receives the start instruction and turns on the fault switch. The fault switch may be a program that, when executed, simulates the effect of a sensor failure.
For example, after the fault switch is turned on, the intrinsic and extrinsic parameter files of the sensor can be altered so that the pose of the sensor is incorrect. As another example, data collected by the laser radar is transmitted to the electronic device over Ethernet, and the Ethernet link can be cut after the fault switch is turned on, achieving the effect of a disconnected laser radar. As yet another example, the electronic device identifies and reads image data collected by the camera using a rule file and a soft-link file, and the rule file and/or the soft-link file can be deleted after the fault switch is turned on, so that the camera fails.
The embodiments of the present disclosure simulate the effect of a sensor failure through software, and thereby test the influence of the sensor failure on the perception algorithm.
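A minimal sketch of such a fault switch, assuming the three failure modes above are exposed as stub operations on a sensor object (the method names are hypothetical), could be:

    class FaultSwitch:
        def __init__(self):
            self.is_on = False

        def turn_on(self, fault: str, sensor) -> None:
            # Triggered by a start instruction from the interaction interface.
            self.is_on = True
            if fault == "bad_extrinsics":
                sensor.perturb_calibration()   # altered parameter files -> wrong pose
            elif fault == "link_down":
                sensor.disconnect()            # e.g. lidar Ethernet cut
            elif fault == "missing_device":
                sensor.remove_device_files()   # camera rule/soft-link files deleted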
In some embodiments, all of the original acquisition information may be determined as the acquisition information.
In other embodiments, part of the original acquisition information may be deleted, and the remaining original acquisition information may be determined as the acquisition information.
In one example, there is at least one piece of original acquisition information, each including a first timestamp that indicates its acquisition time. The above method may further include the following operations: determining a second timestamp corresponding to each piece of original acquisition information, the second timestamp indicating the time at which the original acquisition information was received by the electronic device; then, for each piece of original acquisition information, determining whether the first timestamp and the second timestamp satisfy a first predetermined condition; if so, deleting the original acquisition information, and if not, determining the original acquisition information as acquisition information. The first predetermined condition may be that the interval between the first timestamp and the second timestamp is greater than or equal to a first predetermined duration, which may be, for example, 10 milliseconds.
For example, the acquisition data detected by the sensor needs to be sent to the electronic device for processing, and the time reference of the sensor and that of the electronic device may be the same. The first timestamp at which the sensor collected the data is then compared with the second timestamp at which the electronic device received it, giving a first time interval between the two. If the duration of this first interval is greater than or equal to the first predetermined duration, the timeliness of the data is low, so the data need not be used for computation. If the first interval duration is less than the first predetermined duration, the original acquisition information may be determined as acquisition information, which can then be used to determine the perception information.
According to the embodiments of the present disclosure, the timeliness of the acquisition information is ensured through its first timestamp and second timestamp, thereby improving the accuracy of the simulation.
In another example, the method may further include the following operations: determining whether the first timestamp and a third timestamp corresponding to the first position satisfy a second predetermined condition; if so, deleting the original acquisition information, and if not, determining the original acquisition information as acquisition information. The second predetermined condition may be, for example, that the interval between the first timestamp and the third timestamp is greater than or equal to a second predetermined duration, which may be, for example, 5 milliseconds.
For example, the third timestamp corresponding to the first position may be compared with the first timestamp to obtain a second time interval between them. When the second interval duration does not satisfy the second predetermined condition, that is, the interval is short, this expresses that: at a certain moment, the virtual reference object is located at a specific position in the virtual environment scene, and at that moment, or a moment close to it, the sensor collected the acquisition data.
Limiting the second interval duration ensures the validity of the data. For example, at a first moment the virtual reference object is at position A in the virtual environment scene, and the sensor's acquisition data a at the first moment indicates that the distance between the target object and the reference object in the real scene (for example, the laboratory above) is 50 meters; accordingly, the virtual object can be generated 50 meters from position A in the virtual environment scene. At a second moment the virtual reference object is at position B, and the sensor's acquisition data b at the second moment indicates that the distance between the target object and the reference object in the real scene is 20 meters; accordingly, the virtual object can be generated 20 meters from position B. If the second predetermined condition were not considered, the virtual object might be generated at the second moment at a position 50 meters from position B, making the position of the virtual object in the virtual environment scene inaccurate.
According to the embodiments of the present disclosure, the timeliness of the acquisition information is ensured through its first timestamp and third timestamp, thereby improving the accuracy of the simulation.
In another example, the original acquisition information may be deleted if the first timestamp and the second timestamp satisfy the first predetermined condition, and/or the first timestamp and the third timestamp satisfy the second predetermined condition.
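Putting the two conditions together, a sketch of the filter might read as follows; the 10 ms and 5 ms thresholds are the example values given above, and all timestamps are assumed to share one time reference.

    FIRST_PREDETERMINED_DURATION = 0.010   # seconds; sensor-to-device latency bound
    SECOND_PREDETERMINED_DURATION = 0.005  # seconds; sensor vs. first-position bound

    def keep_acquisition(first_ts: float, second_ts: float, third_ts: float) -> bool:
        stale_on_receive = (second_ts - first_ts) >= FIRST_PREDETERMINED_DURATION
        stale_vs_position = abs(first_ts - third_ts) >= SECOND_PREDETERMINED_DURATION
        # Delete the record when either predetermined condition is met.
        return not (stale_on_receive or stale_vs_position)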
In another example, at least one piece of original acquisition information may be transformed from the sensor coordinate system to the virtual environment scene coordinate system to obtain at least one piece of transformed acquisition information. It is then determined whether each piece of transformed acquisition information lies outside the boundary information; if so, the transformed acquisition information and/or the corresponding original acquisition information may be deleted, and if not, the transformed acquisition information and/or the corresponding original acquisition information may be determined as acquisition information.
For example, taking the case where the original acquisition information is point cloud data collected by a laser radar, after the coordinate transformation the point cloud data lying outside the virtual environment scene can be filtered out, and the filtered-out point cloud data no longer participates in determining the perception information.
With the technical solution provided by the embodiments of the present disclosure, if transformed acquisition information falls outside the boundary of the virtual environment scene, the position of a virtual object cannot be determined based on the map data corresponding to the scene and would be inaccurate. Acquisition information beyond the boundary can therefore be deleted, so that the position of the virtual object in the virtual environment scene is reflected accurately. In addition, since part of the acquisition information is deleted before it is used to determine the perception information, the computation the electronic device performs to determine the perception information is reduced and data processing efficiency is improved.
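A sketch of this boundary filter, assuming an axis-aligned rectangular scene boundary and 2D points already transformed into the scene coordinate system (the disclosure does not fix a boundary representation), might be:

    def filter_by_boundary(transformed_points, x_min, x_max, y_min, y_max):
        # Keep only points inside the scene boundary; out-of-boundary points
        # no longer participate in determining the perception information.
        return [(x, y) for (x, y) in transformed_points
                if x_min <= x <= x_max and y_min <= y <= y_max]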
According to another embodiment of the present disclosure, the method may further include the following operations: determining actual information for the target object, and then determining the accuracy of the perception information according to the actual information.
For example, the actual information may represent ground-truth information such as the true category, true position, and true orientation of the target object.
For example, the actual information of the target object may be planned in advance, and the target object then placed according to that information. Alternatively, after the target object is placed, another perception algorithm that has already been verified detects the target object, and the perception result it outputs is used as the actual information.
For example, the actual information may be compared with the perception information to determine whether the perception information is accurate. When the perception information is obtained by processing data with an unverified perception algorithm, the difference between the actual information and the perception information indicates whether the unverified perception algorithm is accurate, which facilitates debugging the perception algorithm according to actual requirements.
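The comparison could be as simple as the following sketch; the fields compared and any pass thresholds are illustrative assumptions, since the disclosure only calls for measuring the difference.

    import math

    def perception_error(perceived: dict, actual: dict) -> dict:
        dx = perceived["position"][0] - actual["position"][0]
        dy = perceived["position"][1] - actual["position"][1]
        return {
            "position_error_m": math.hypot(dx, dy),
            "category_match": perceived["category"] == actual["category"],
            "heading_error_rad": abs(perceived["heading"] - actual["heading"]),
        }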
FIG. 3 is a schematic diagram of a virtual object simulation method according to an embodiment of the present disclosure.
As shown in FIG. 3, in this embodiment a scene selector 302 may be used to select one virtual environment scene 303 from a plurality of virtual environment scenes 301, where the virtual environment scene 303 corresponds to map data. Acquisition information for the surrounding target objects may be collected using the sensor 304. The virtual environment scene 303 may also be displayed in the human-computer interaction interface, a virtual reference object corresponding to the reference object is generated at a predetermined position in the virtual environment scene 303, and the position of the virtual reference object in the virtual environment scene 303 is obtained; this may serve as the indoor positioning 305.
Next, the perception information may be determined according to the virtual environment scene 303, the acquisition information collected by the sensor 304, and the indoor positioning 305; for example, these are input to a perception algorithm 306 to be verified, and the perception algorithm 306 outputs the perception information.
A virtual object corresponding to the target object may then be generated in the virtual environment scene 303, resulting in a target virtual scene 307. Through this target virtual scene 307, the perception algorithm 306 can be verified.
FIG. 4 is a block diagram of a schematic structure of a virtual object simulation device according to an embodiment of the present disclosure.
As shown in FIG. 4, the present disclosure also provides a virtual object simulation device 400 including a sensing mechanism and an electronic device 401.
The sensing mechanism may include a rack and at least one sensor. The rack may be placed in a predetermined real scene, such as a laboratory where simulation tests are conducted. The sensors are mounted on the rack and may include at least one of a lidar 402, a millimeter-wave radar 403, an ultrasonic radar 404, an image acquisition device 405, and an inertial navigation device 406.
The electronic device 401 may be used to perform the virtual object simulation method described above. In some embodiments, the electronic device 401 may include a plurality of software modules, such as a scene selector, a fault switch, and an indoor positioning module. In practical applications, a user can build virtual environment scenes in advance according to actual requirements and then select the currently needed virtual environment scene with the scene selector. The indoor positioning module then determines the position of the virtual reference object in the virtual environment scene using the virtual environment scene and the map data. The fault switch can simulate the effect of a sensor failure. The electronic device can also acquire the acquisition information of the sensors and then determine the perception information based on the position information of the reference object, the acquisition information, the virtual environment scene, the timestamps, and the like. When the perception information is determined with a perception algorithm, the perception algorithm is thereby verified, so that the user can debug it according to the verification result.
In some embodiments, the virtual object simulation apparatus may further include a display device 407 and an input device, the display device 407 may include a display screen for presenting a human-computer interaction interface, and the input device may include a keyboard 408, a mouse 409, and the like.
In some embodiments, the virtual object simulation device may be used indoors: physical models such as vehicles, traffic cones, and pedestrians may be placed indoors as target objects, data on the target objects is collected by the sensors, and virtual objects are generated in the virtual environment scene using the collected data. Because the virtual object simulation device does not need to run on a real road, it can be used as a teaching product, making it convenient for students to learn about autonomous driving.
In the related art, typical scene data can be collected by a collection vehicle equipped with various sensors, and the perception effect is then verified using the data collected from various real scenes. It can be understood that this approach is costly, constrained by road traffic regulations, and inconvenient to operate.
Compared with that technical solution, the virtual object simulation device provided by the embodiments of the present disclosure can be used in an indoor environment, where a user can complete data acquisition, scene simulation, perception-effect hardware-in-the-loop simulation, and the like. This improves the development efficiency of perception algorithms for autonomous vehicles and mitigates the high data-acquisition cost of simulation based on collected data.
FIG. 5 is a block diagram of a schematic structure of a virtual object simulation apparatus according to an embodiment of the present disclosure.
As shown in FIG. 5, the virtual object simulation apparatus 500 may include a first determination module 510, a second determination module 520, a third determination module 530, and a first generation module 540.
The first determination module 510 is used to determine a virtual environment scene.
The second determining module 520 is configured to determine acquisition information for the target object, where the acquisition information is collected by a sensor from the target object in a predetermined real scene.
The third determining module 530 is configured to determine perception information for the target object according to the virtual environment scene, the acquisition information, and the position information of the reference object. The position information of the reference object includes a first position of the reference object in the virtual environment scene coordinate system and a second position of the reference object in the sensor coordinate system.
The first generating module 540 is configured to generate a virtual object corresponding to the target object in the virtual environment scene according to the perception information.
According to another embodiment of the present disclosure, the second determining module includes a first acquisition sub-module, a first determining sub-module, a first deletion sub-module, and a second determining sub-module. The first acquisition sub-module is configured to acquire at least one piece of original acquisition information for at least one target object, each piece of original acquisition information comprising a first timestamp indicating an acquisition time. The first determining sub-module is configured to determine a second timestamp corresponding to each piece of original acquisition information, the second timestamp indicating the time at which the original acquisition information was received. The first deletion sub-module is configured to, for each piece of original acquisition information, delete the original acquisition information from the at least one piece of original acquisition information in response to determining that the first timestamp, the second timestamp, and a third timestamp corresponding to the first position satisfy a predetermined condition. The second determining sub-module is configured to determine the remaining original acquisition information as the acquisition information.
According to another embodiment of the present disclosure, the predetermined condition includes at least one of: the interval duration between the first timestamp and the second timestamp is greater than or equal to a first preset duration, and the interval duration between the first timestamp and the third timestamp is greater than or equal to a second preset duration.
According to another embodiment of the present disclosure, the second determining module includes a second acquisition sub-module, a transformation sub-module, a second deletion sub-module, and a third determining sub-module. The second acquisition sub-module is configured to acquire at least one piece of original acquisition information for at least one target object. The transformation sub-module is configured to transform the at least one piece of original acquisition information from the sensor coordinate system to the virtual environment scene coordinate system, respectively, to obtain at least one piece of transformed acquisition information. The second deletion sub-module is configured to delete, from the at least one piece of transformed acquisition information, the transformed acquisition information lying outside the boundary information. The third determining sub-module is configured to determine the acquisition information according to the remaining transformed acquisition information.
According to another embodiment of the present disclosure, the second determining module includes a turn-on sub-module, a third acquisition sub-module, and a fourth determining sub-module. The turn-on sub-module is configured to turn on a fault switch associated with the sensor in response to receiving a start instruction. The third acquisition sub-module is configured to acquire original acquisition information while the fault switch is on. The fourth determining sub-module is configured to determine the acquisition information according to the original acquisition information.
According to another embodiment of the present disclosure, the apparatus further includes a fourth determining module and a fifth determining module. The fourth determining module is configured to determine actual information for the target object. The fifth determining module is configured to determine the accuracy of the perception information according to the actual information.
According to another embodiment of the present disclosure, the apparatus further includes a second generating module and a sixth determining module. The second generating module is configured to, in response to receiving a generation instruction, generate a virtual reference object in the virtual environment scene according to the generation instruction. The sixth determining module is configured to determine the position of the virtual reference object in the virtual environment scene as the first position according to the map data corresponding to the virtual environment scene.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the personal information involved comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
In the technical solution of the present disclosure, the user's authorization or consent is obtained before the user's personal information is acquired or collected.
According to an embodiment of the present disclosure, there is also provided an electronic device, comprising at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the virtual object simulation method described above.
According to an embodiment of the present disclosure, there is also provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to execute the above virtual object simulation method.
According to an embodiment of the present disclosure, there is also provided a computer program product, comprising a computer program, which when executed by a processor, implements the above virtual object simulation method.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in FIG. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 can also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, and the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 executes the respective methods and processes described above, such as the virtual object simulation method. For example, in some embodiments, the virtual object simulation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the virtual object simulation method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the virtual object simulation method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (19)

1. A virtual object simulation method, comprising:
determining a virtual environment scene;
determining acquisition information for a target object, wherein the acquisition information is collected by a sensor from the target object in a predetermined real scene;
determining perception information for the target object according to the virtual environment scene, the acquisition information, and position information of a reference object; and
generating a virtual object corresponding to the target object in the virtual environment scene according to the perception information;
wherein the position information of the reference object includes: a first position of the reference object in a virtual environment scene coordinate system and a second position of the reference object in a sensor coordinate system.
2. The method of claim 1, wherein the determining acquisition information for a target object comprises:
acquiring at least one piece of original acquisition information for at least one target object, each piece of original acquisition information comprising a first timestamp indicating an acquisition time;
determining a second timestamp corresponding to each piece of original acquisition information, the second timestamp indicating a time at which the original acquisition information was received;
for each piece of original acquisition information, in response to determining that the first timestamp, the second timestamp, and a third timestamp corresponding to the first position satisfy a predetermined condition, deleting the original acquisition information from the at least one piece of original acquisition information; and
determining the remaining original acquisition information as the acquisition information.
3. The method of claim 2, wherein the predetermined condition comprises:
the interval duration between the first timestamp and the second timestamp is greater than or equal to a first preset duration, or the interval duration between the first timestamp and the third timestamp is greater than or equal to a second preset duration.
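Claims 2 and 3 together amount to a staleness filter over three timestamps. Below is a minimal sketch of that filter, assuming the timestamps are floats in seconds and the two preset durations arrive as configuration values; the RawRecord dataclass and every name in it are illustrative, not from the disclosure.

from dataclasses import dataclass

@dataclass
class RawRecord:
    t_acquired: float   # first timestamp: when the sensor sampled the target
    t_received: float   # second timestamp: when the record arrived
    payload: object     # the measurement itself

def filter_stale(records, t_pose, max_latency, max_pose_skew):
    # t_pose is the third timestamp, tied to the reference object's first
    # position; a record is deleted when either interval is too large.
    kept = []
    for r in records:
        too_late = abs(r.t_received - r.t_acquired) >= max_latency
        too_skewed = abs(t_pose - r.t_acquired) >= max_pose_skew
        if not (too_late or too_skewed):
            kept.append(r)
    return kept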
4. The method of claim 1, wherein the determining of the acquisition information for the target object comprises:
obtaining at least one piece of original acquisition information for at least one target object;
transforming each piece of original acquisition information from the coordinate system of the sensor to the coordinate system of the virtual environment scene to obtain at least one piece of transformed acquisition information;
deleting, from the at least one piece of transformed acquisition information, any transformed acquisition information located outside a boundary of the virtual environment scene; and
determining the acquisition information according to the remaining transformed acquisition information.
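Claim 4 maps each raw record into the scene frame and discards whatever falls outside the scene. A sketch under the assumption that the boundary information reduces to an axis-aligned box; the function names are illustrative.

import numpy as np

def inside_boundary(p_scene, box_min, box_max):
    # True when a scene-frame point lies inside an axis-aligned boundary box.
    return all(lo <= v <= hi for v, lo, hi in zip(p_scene, box_min, box_max))

def clip_to_scene(points_sensor, T_scene_sensor, box_min, box_max):
    # Transform sensor-frame points into the scene frame, then drop any
    # point that falls outside the assumed boundary box.
    kept = []
    for p in points_sensor:
        p_scene = (T_scene_sensor @ np.append(p, 1.0))[:3]
        if inside_boundary(p_scene, box_min, box_max):
            kept.append(p_scene)
    return kept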
5. The method of claim 1, wherein the determining of the acquisition information for the target object comprises:
in response to receiving a turn-on instruction, turning on a fault switch associated with the sensor;
acquiring original acquisition information while the fault switch is turned on; and
determining the acquisition information according to the original acquisition information.
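The claims do not say what the fault switch does to the data once turned on; one plausible reading, sketched below, is a software fault injector that degrades raw readings with dropouts and noise so that downstream perception can be tested against sensor failure. The FaultSwitch class and its failure model are assumptions, not the disclosure's mechanism.

import random

class FaultSwitch:
    # Illustrative fault injector gated by a turn-on instruction; assumes
    # the raw readings are numeric scalars.
    def __init__(self, drop_rate=0.2, noise_std=0.5):
        self.enabled = False
        self.drop_rate = drop_rate
        self.noise_std = noise_std

    def turn_on(self):
        self.enabled = True

    def apply(self, readings):
        if not self.enabled:
            return list(readings)
        faulty = []
        for r in readings:
            if random.random() < self.drop_rate:
                continue                          # simulated dropout
            faulty.append(r + random.gauss(0.0, self.noise_std))
        return faulty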
6. The method of claim 1, further comprising:
determining actual information for the target object; and
determining the accuracy of the perception information according to the actual information.
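Claim 6 scores the perception information against ground truth (the "actual information") but leaves the metric open. A common choice, assumed here, is to match perceived and actual positions within a distance threshold and report precision and recall; the function name and threshold are illustrative.

import numpy as np

def perception_accuracy(perceived, actual, max_dist=1.0):
    # Greedy nearest-neighbour matching of perceived positions to ground
    # truth; a pair counts as a true positive when closer than max_dist.
    unmatched = list(actual)
    true_pos = 0
    for p in perceived:
        if not unmatched:
            break
        dists = [np.linalg.norm(np.asarray(p) - np.asarray(a)) for a in unmatched]
        i = int(np.argmin(dists))
        if dists[i] <= max_dist:
            unmatched.pop(i)
            true_pos += 1
    precision = true_pos / len(perceived) if perceived else 0.0
    recall = true_pos / len(actual) if actual else 0.0
    return precision, recall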
7. The method of claim 1, further comprising:
in response to receiving a generation instruction, generating a virtual reference object in the virtual environment scene according to the generation instruction; and
determining, according to map data corresponding to the virtual environment scene, a position of the virtual reference object in the virtual environment scene as the first position.
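Claim 7 spawns the reference object on demand and then reads the first position back out of the scene's map data rather than trusting the spawn request. A minimal sketch; the scene API and the map-data layout are both assumptions.

def spawn_reference_object(scene, map_data, generation_instruction):
    # `scene.add_object` and the map_data dictionary layout are hypothetical;
    # the point is that the first position is resolved from the map data,
    # not taken from the generation instruction itself.
    obj_id = scene.add_object(generation_instruction)
    first_position = map_data["objects"][obj_id]["pose"]
    return obj_id, first_position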
8. A virtual object simulation apparatus, comprising:
a first determining module for determining a virtual environment scene;
a second determining module for determining acquisition information for a target object, wherein the acquisition information is obtained by using a sensor to detect the target object in a predetermined real scene;
a third determining module for determining perception information for the target object according to the virtual environment scene, the acquisition information, and position information of a reference object; and
a first generating module for generating, in the virtual environment scene, a virtual object corresponding to the target object according to the perception information;
wherein the position information of the reference object includes: a first position of the reference object in a coordinate system of the virtual environment scene and a second position of the reference object in a coordinate system of the sensor.
9. The apparatus of claim 8, wherein the second determining module comprises:
a first acquisition sub-module for obtaining at least one piece of original acquisition information for at least one target object, each piece of original acquisition information comprising a first timestamp indicating an acquisition time;
a first determining sub-module for determining a second timestamp corresponding to each piece of original acquisition information, the second timestamp indicating a time at which the piece of original acquisition information was received;
a first deletion sub-module for deleting, from the at least one piece of original acquisition information, each piece of original acquisition information whose first timestamp, second timestamp, and a third timestamp corresponding to the first position satisfy a predetermined condition; and
a second determining sub-module for determining the remaining original acquisition information as the acquisition information.
10. The apparatus of claim 9, wherein the predetermined condition comprises:
the interval duration between the first timestamp and the second timestamp is greater than or equal to a first preset duration, or the interval duration between the first timestamp and the third timestamp is greater than or equal to a second preset duration.
11. The apparatus of claim 8, wherein the second determining module comprises:
a second acquisition sub-module for obtaining at least one piece of original acquisition information for at least one target object;
a transformation sub-module for transforming each piece of original acquisition information from the coordinate system of the sensor to the coordinate system of the virtual environment scene to obtain at least one piece of transformed acquisition information;
a second deletion sub-module for deleting, from the at least one piece of transformed acquisition information, any transformed acquisition information located outside a boundary of the virtual environment scene; and
a third determining sub-module for determining the acquisition information according to the remaining transformed acquisition information.
12. The apparatus of claim 8, wherein the second determining module comprises:
a turn-on sub-module for turning on a fault switch associated with the sensor in response to receiving a turn-on instruction;
a third acquisition sub-module for acquiring original acquisition information while the fault switch is turned on; and
a fourth determining sub-module for determining the acquisition information according to the original acquisition information.
13. The apparatus of claim 8, further comprising:
a fourth determining module for determining actual information for the target object; and
a fifth determining module for determining the accuracy of the perception information according to the actual information.
14. The apparatus of claim 8, further comprising:
a second generating module for generating, in response to receiving a generation instruction, a virtual reference object in the virtual environment scene according to the generation instruction; and
a sixth determining module for determining, according to map data corresponding to the virtual environment scene, a position of the virtual reference object in the virtual environment scene as the first position.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 7.
16. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
18. A virtual object simulation device, comprising:
the electronic device of claim 15; and
a sensing mechanism comprising a rack and at least one sensor arranged on the rack, each sensor being used for detecting a target object around the rack to obtain acquisition information.
19. The apparatus of claim 18, wherein the at least one sensor comprises at least one of: a laser radar, a millimeter-wave radar, an ultrasonic radar, an image acquisition device, and an inertial navigation device.
CN202211107262.9A 2022-09-09 2022-09-09 Virtual object simulation method, device, equipment and storage medium Pending CN115657494A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211107262.9A CN115657494A (en) 2022-09-09 2022-09-09 Virtual object simulation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211107262.9A CN115657494A (en) 2022-09-09 2022-09-09 Virtual object simulation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115657494A 2023-01-31

Family

ID=84984330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211107262.9A Pending CN115657494A (en) 2022-09-09 2022-09-09 Virtual object simulation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115657494A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116734892A (en) * 2023-08-15 2023-09-12 腾讯科技(深圳)有限公司 Method, device, equipment and medium for processing driving data
CN116734892B (en) * 2023-08-15 2023-11-03 腾讯科技(深圳)有限公司 Method, device, equipment and medium for processing driving data

Similar Documents

Publication Publication Date Title
US11783590B2 (en) Method, apparatus, device and medium for classifying driving scenario data
CN111998860B (en) Automatic driving positioning data verification method and device, electronic equipment and storage medium
KR20210127121A (en) Road event detection method, apparatus, device and storage medium
CN111324945B (en) Sensor scheme determining method, device, equipment and storage medium
CN113341935A (en) Vehicle testing method, device, testing equipment, system and storage medium
CN111680362A (en) Method, device and equipment for acquiring automatic driving simulation scene and storage medium
CN112987593B (en) Visual positioning hardware-in-the-loop simulation platform and simulation method
CN114120650B (en) Method and device for generating test results
CN113704116A (en) Data processing method, device, electronic equipment and medium for automatic driving vehicle
KR102606423B1 (en) Method, apparatus, and device for testing traffic flow monitoring system
CN112147632A (en) Method, device, equipment and medium for testing vehicle-mounted laser radar perception algorithm
CN115221722B (en) Simulation test method, model training method and equipment for automatic driving vehicle
CN218332314U (en) HIL simulation test platform based on intelligent driving area controller
CN112699765A (en) Method and device for evaluating visual positioning algorithm, electronic equipment and storage medium
CN115657494A (en) Virtual object simulation method, device, equipment and storage medium
CN113119999B (en) Method, device, equipment, medium and program product for determining automatic driving characteristics
CN112651535A (en) Local path planning method and device, storage medium, electronic equipment and vehicle
CN115082690B (en) Target recognition method, target recognition model training method and device
CN115374016A (en) Test scene simulation system and method, electronic device and storage medium
CN115575931A (en) Calibration method, calibration device, electronic equipment and storage medium
CN115357500A (en) Test method, device, equipment and medium for automatic driving system
CN113885496A (en) Intelligent driving simulation sensor model and intelligent driving simulation method
CN115587496B (en) Test method, device, equipment, system and storage medium based on vehicle-road cooperation
CN111597940A (en) Method and device for evaluating rendering model, electronic equipment and readable storage medium
CN112560258A (en) Test method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination