WO2022141294A1 - Simulation test method and system, simulator, storage medium, and program product


Info

Publication number
WO2022141294A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
scene images
images
movable platform
image
Prior art date
Application number
PCT/CN2020/141798
Other languages
French (fr)
Chinese (zh)
Inventor
张树汉
刘天博
应佳行
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN202080081655.XA (CN114846515A)
Priority to PCT/CN2020/141798
Publication of WO2022141294A1

Classifications

    • G06T7/251: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G05B17/00: Systems involving the use of models or simulators of said systems
    • G06F9/45504: Abstract machines for program code execution, e.g. Java virtual machine [JVM], interpreters, emulators
    • G06T7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G06T2207/30244: Camera pose
    • G06T2207/30248: Vehicle exterior or interior

Definitions

  • the embodiments of the present application relate to the technical field of visual processing, and in particular, to a simulation testing method, system, simulator, storage medium, and program product.
  • an intelligent control system based on visual images, such as an automatic driving system, can perform corresponding operations through preset algorithms according to the images collected by the camera, so as to realize functions such as automatic driving and provide convenience for users.
  • in order to ensure the performance of the system, the system needs to be tested during the product development and verification stage.
  • for an intelligent control system based on stereo vision, however, it is difficult to test the system through conventional visual simulation technology; testing can only rely on actual road tests, which have low efficiency and high cost.
  • Embodiments of the present application provide a simulation testing method, system, simulator, storage medium and program product, which are used to implement testing of a control system based on stereo vision.
  • an embodiment of the present application provides a simulation test method, which is applied to a simulation test system; the simulation test system is used to test a control system of a movable platform based on a virtual scene model, and the scene model includes multiple scene elements; the control system is configured to, based on scene images observed by at least two visual sensors, output a control signal for controlling the movable platform; the method includes:
  • acquiring the control signal of the movable platform output by the control system; based on the control signal, simulating the movement of the movable platform in the scene model to obtain the relative pose between the movable platform and the scene elements; generating a plurality of scene images according to the relative pose, where the plurality of scene images include scene images observed by at least two of the visual sensors when the movable platform moves in the scene model; and outputting the plurality of scene images, so that the control system generates corresponding control signals according to the plurality of scene images.
  • an embodiment of the present application provides a simulation testing method, including:
  • acquiring a control signal output by a control system to be tested, where the control signal is a control signal for controlling the movable platform output by the control system according to historically input scene images and a preset control model; and outputting at least two scene images according to the control signal and a virtual scene model, where the scene model includes multiple scene elements;
  • the at least two scene images include a first scene image and a second scene image, the first scene image includes a first pixel area, the second scene image includes a second pixel area, and the first pixel area and the second pixel area describe the same scene element; the deviation between the position of the first pixel area in the first scene image and the position of the second pixel area in the second scene image is determined according to the relative pose between the visual sensors of the movable platform;
  • the outputted scene images are used to represent the scene images of the scene elements observed by the vision sensors of the movable platform after the movable platform moves in response to the control signal and the relative pose between the movable platform and the scene elements changes.
  • an embodiment of the present application provides a simulation test system, where the simulation test system is used to test a control system of a movable platform based on a virtual scene model, where the scene model includes a plurality of scene elements;
  • the control system is configured to output a control signal for controlling the movable platform based on scene images observed by at least two visual sensors;
  • the simulation test system includes an emulator and an image output device
  • the simulator is used to obtain the control signal of the movable platform output by the control system, simulate the movement of the movable platform in the scene model based on the control signal, obtain the relative pose between the movable platform and the scene elements, and generate observable scene images based on the relative pose;
  • the image output device is configured to acquire the scene images generated by the simulator and output a plurality of scene images according to the acquired scene images, where the plurality of scene images include scene images observed by at least two of the visual sensors when the movable platform moves in the scene model, so that the control system generates corresponding control signals according to the plurality of scene images.
  • an embodiment of the present application provides a simulation test system, including: a simulator and an image output device;
  • the simulator is used to obtain the control signal output by the control system to be tested, and determine the corresponding scene image according to the control signal and the virtual scene model; wherein, the virtual scene model includes a plurality of scene elements;
  • the image output device is configured to acquire the scene image, and output at least two scene images according to the scene image;
  • the virtual scene model includes a plurality of scene elements
  • the at least two scene images include a first scene image and a second scene image, the first scene image includes a first pixel area, the second scene image includes a second pixel area, and the first pixel area and the second pixel area describe the same scene element; the deviation between the position of the first pixel area in the first scene image and the position of the second pixel area in the second scene image is determined according to the relative pose between the visual sensors of the movable platform;
  • the control signal is a control signal for controlling the movable platform output by the control system according to the historically input at least two scene images and a preset control model;
  • the outputted scene images are used to represent the scene images of the scene elements observed by the vision sensors of the movable platform after the movable platform moves in response to the control signal and the relative pose between the movable platform and the scene elements changes.
  • an embodiment of the present application provides an emulator, including: a memory and at least one processor;
  • the memory stores computer-executable instructions
  • the at least one processor executes computer-executable instructions stored in the memory to cause the at least one processor to perform the method of the first aspect or the second aspect.
  • an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored; when the computer program is executed, the method described in the first aspect or the second aspect is implemented.
  • an embodiment of the present application provides a computer program product, including a computer program, which implements the method described in the first aspect or the second aspect when the computer program is executed by a processor.
  • the simulation test system is used to test the control system of the movable platform based on a virtual scene model, and the scene model includes a plurality of scene elements
  • the control system is configured to output a control signal for controlling the movable platform based on scene images observed by at least two visual sensors. Specifically, the simulation test system can acquire the control signal of the movable platform output by the control system, simulate the movement of the movable platform in the scene model based on the control signal, obtain the relative pose between the movable platform and the scene elements, and generate a plurality of scene images according to the relative pose, where the plurality of scene images include scene images observed by at least two of the visual sensors when the movable platform moves in the scene model. The plurality of scene images are output so that the control system generates corresponding control signals according to them. In this way, the actual image input of the control system can be simulated by generating scene images observable by at least two vision sensors, so as to realize the simulation test of the control system based on stereo vision, test and verify the basic functions of the control system economically and efficiently, solve practical problems such as unreliability, incomplete scene test coverage, and high investment caused by relying on a large number of road tests, and effectively improve test efficiency and reduce test cost.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a simulation testing method provided in an embodiment of the present application.
  • FIG. 3A is a schematic flowchart of another simulation testing method provided by an embodiment of the present application.
  • FIG. 3B is a schematic diagram of the positions of scene elements in a scene image according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of another simulation testing method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a test architecture based on multiple display devices provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a calibration pattern provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a test architecture based on an optical system provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a test architecture based on a 3D display device provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a test architecture based on an image output interface provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a test architecture based on a direct connection of an emulator provided by an embodiment of the present application.
  • FIG. 11A is a schematic structural diagram of a simulation testing device provided by an embodiment of the present application.
  • FIG. 11B is a schematic structural diagram of another simulation testing device provided by an embodiment of the present application.
  • FIG. 12A is a schematic structural diagram of a simulation test system provided by an embodiment of the application.
  • FIG. 12B is a schematic structural diagram of another simulation test system provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of an emulator provided by an embodiment of the present application.
  • the solutions provided by the embodiments of the present application can be applied to simulation testing of the control system of the movable platform.
  • the movable platform can be any device that can move autonomously, such as a vehicle, a ship, an aircraft, an intelligent robot, and the like.
  • the control system may be a control system based on visual images, and the control system can obtain images of the surrounding environment through the visual sensor of the movable platform, and output corresponding control signals according to the environment images, so as to realize the control of the movable platform.
  • simulation test verification is often essential for control systems based on visual images.
  • a camera can be used as an image input to test the performance of the system in a simulated environment.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • the driving control system and the camera connected to it constitute the tested system.
  • the simulator can output road video to the display device, and the display device displays it; after the camera captures the road video displayed by the display device, the automatic driving system can output driving control signals according to the road video collected by the camera, and the simulator further adjusts the output road video according to the driving control signals.
  • the simulation test can play the road video to the camera for "viewing", so that the driving control system "thinks" that it is driving on the actual road, and then makes the same calculation as in the actual road test and outputs the corresponding driving control signal to the simulator.
  • the simulator determines the next frame of road image through the control signal output by the driving control system and outputs it, so as to realize the test verification process.
  • HIL: Hardware In the Loop.
  • A-SPICE: Automotive Software Process Improvement and Capability Determination.
  • the simulation test of hardware-in-the-loop can be realized, so that the product developed by using the vision technology does not need to be tested in the actual use scene when performing functional verification.
  • the driving control system based on a single camera does not have the problem of stereo depth, so the simulation test can be realized through a single display device.
  • if a single display device is used and its image is captured by two cameras and input to the system, the stereoscopic scene described by the video stream cannot be reconstructed by stereo matching, because what the two cameras observe is a plane rather than an actual stereoscopic scene; the depth information is lost in principle and cannot be recovered. Therefore, a driving control system based on stereo vision is difficult to test with conventional visual simulation technology and can only rely on actual road tests, which have low test efficiency and high cost.
  • the embodiment of the present application provides a simulation test method, which can obtain the driving control signal output by the driving control system to be tested, simulate the movement of the vehicle in the virtual scene model according to the driving control signal and a preset virtual scene model, and generate scene images that can be observed by multiple visual sensors of the vehicle, so that the driving control system can construct stereoscopic information according to the generated scene images and output corresponding driving control signals. In this way, the simulation test of a driving control system based on stereo vision is realized, most of the basic functions of the driving control system can be tested and verified cost-effectively in the simulation test, test efficiency is improved, and test cost is reduced.
  • FIG. 2 is a schematic flowchart of a simulation testing method provided by an embodiment of the present application.
  • the method in this embodiment can be applied to a simulation test system.
  • the simulation test system can be used to test the control system of the movable platform based on a virtual scene model, and the scene model includes a plurality of scene elements.
  • the control system may be configured to output control signals for controlling the movable platform based on images of the scene observed by at least two vision sensors.
  • the simulation testing system may include an emulator, and the emulator executes the following method, or may include an emulator and other devices, and the emulator and other devices jointly complete the following method. As shown in Figure 2, the method may include:
  • Step 201 Acquire the control signal of the movable platform output by the control system.
  • the embodiments of the present application can be applied to the simulation test of the control system of any type of movable platform.
  • this embodiment is described by taking the test of the driving control system of the vehicle as an example. It can be understood that the method is also suitable for testing control systems of other movable platforms.
  • the control system in this step may be a driving control system applied to a vehicle
  • the output control signal may be a driving control signal
  • the movable platform may be a vehicle.
  • the driving control system can be implemented based on software, hardware or a combination of software and hardware.
  • the driving control signal output by the driving control system to be tested may be obtained, and the driving control signal is a driving control signal for controlling the vehicle that is output by the driving control system according to the historically input scene images and the preset driving control model.
  • the driving control system may be an automatic driving system (Autopilot System), an advanced driver assistance system (ADAS), or any system capable of realizing driving control.
  • the vehicle may be provided with multiple visual sensors, and the multiple visual sensors may be set at different positions of the vehicle, for example, at different positions on the front windshield of the vehicle; the multiple visual sensors at different positions can collect different pictures, so that the depth information of the surrounding environment can be constructed through multiple pictures for better driving control.
  • the driving control model may be a preset driving control model, the input of the model may include images respectively collected by a plurality of visual sensors, and the output may include corresponding driving control signals.
  • the driving control signal may be used to control the vehicle.
  • the driving control signal may be used to control the speed, direction, braking and other aspects of the vehicle.
  • the driving control signal is generated based on images collected by multiple visual sensors. For example, when it is determined that there is an obstacle ahead according to the images collected by the multiple visual sensors, the output driving control signal may be: braking; when the images collected by the visual sensors indicate that the road ahead is flat and free of obstacles, the output driving control signal may be: driving at acceleration a; and so on.
  • the driving control signal may also be comprehensively determined on the basis of images collected by multiple visual sensors and in combination with other information such as route planning, road control information, and the like.
  • the output driving control signal may be: turn left.
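  • As an illustration of how such a preset control model might map perceived depth to a driving control signal, the following minimal Python sketch chooses between braking and accelerating based on an estimated obstacle distance. It is not part of the patent; the threshold value, acceleration value, and function name are hypothetical.

```python
def decide_control_signal(obstacle_distance_m, brake_distance_m=15.0, accel=1.5):
    """Toy driving-control model: brake if an obstacle is close, otherwise accelerate.

    obstacle_distance_m: nearest obstacle distance estimated from the stereo images
                         (None means no obstacle was detected).
    Returns a simple control-signal dictionary.
    """
    if obstacle_distance_m is not None and obstacle_distance_m < brake_distance_m:
        return {"action": "brake"}
    return {"action": "accelerate", "acceleration_m_s2": accel}

# Example: an obstacle 8 m ahead triggers braking; a clear road triggers acceleration.
print(decide_control_signal(8.0))    # {'action': 'brake'}
print(decide_control_signal(None))   # {'action': 'accelerate', 'acceleration_m_s2': 1.5}
```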
  • the movable platform may be a physical movable platform or a virtual movable platform
  • the visual sensor may be a physical visual sensor or a virtual visual sensor
  • the driving control system may be connected to a simulator; the simulator may output scene images to the driving control system, and the driving control system may output driving control signals to the simulator according to the historically input scene images and a preset driving control model, and then the simulator updates the scene images according to the driving control signals and outputs them to the driving control system.
  • the simulator can update the position of the vehicle in the virtual scene in real time according to the driving control signal; as the position changes, the scene image that the driving control system can "view" also changes accordingly, similar to a racing game, which is equivalent to the driving control system controlling the vehicle to run in a virtual scene.
  • Step 202 Based on the control signal, simulate the movement of the movable platform in the scene model to obtain the relative pose between the movable platform and the scene element.
  • the scene element can be any element in the scene model, for example, during the road simulation test, the scene element can include lane lines, roadblocks, trees, pedestrians, traffic lights, etc., for simulating the actual road surroundings.
  • the relative pose may include position and/or angle information of the movable platform relative to one or more scene elements in the scene model.
  • Step 203 Generate a plurality of scene images according to the relative poses, where the plurality of scene images include scene images observed by at least two of the visual sensors when the movable platform moves in the scene model.
  • At least two vision sensors constitute a multi-eye vision sensor, which can collect images of the surrounding environment.
  • the embodiments of the present application do not limit the number and pose of the visual sensors, the number of sensors may be two or more, and the multiple sensors may be arranged left and right, up and down, or in any other manner.
  • based on the relative pose, the position, angle, and other information of each visual sensor installed on the vehicle relative to the surrounding scene elements can be determined, and then the scene image observed by each visual sensor can be determined.
  • the number of the multiple scene images may be consistent with the number of visual sensors.
  • for example, if the control system uses the images collected by the left and right visual sensors of the vehicle as input, the multiple scene images generated in this step may include the scene image observable by the left vision sensor and the scene image observable by the right vision sensor.
  • Step 204 Output the multiple scene images, so that the control system generates corresponding control signals according to the multiple scene images.
  • the simulator can output corresponding multiple scene images according to the driving control signals obtained from the driving control system, and the output multiple scene images can be used to simulate images with parallax captured by multiple visual sensors of the vehicle.
  • the scene image may be a scene image that can be observed after the vehicle moves in the scene model in response to the control signal.
  • the simulator can output a video stream with parallax as the vehicle continues to move forward. This creates a process of constantly controlling and updating the scene image, enabling testing of the driving control system.
  • the scene image may not be directly output to the control system by the emulator, but input to the control system by other devices, which is not limited in this embodiment.
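  • The closed loop described in steps 201 to 204 can be pictured as the structural Python sketch below. The object names `simulator`, `control_system`, and `image_output` and their methods are placeholders for the simulator, the control system under test, and the image output device; they are not interfaces defined in the patent.

```python
def run_simulation_loop(simulator, control_system, image_output, num_steps=1000):
    """Closed-loop stereo simulation test (steps 201-204), as a sketch.

    simulator      : updates the virtual platform pose and renders the virtual views
    control_system : system under test; maps stereo scene images to a control signal
    image_output   : device or interface that delivers images to the control system
    """
    control_signal = None
    for _ in range(num_steps):
        # Step 202: simulate platform movement in the scene model under the last signal
        relative_poses = simulator.step(control_signal)
        # Step 203: generate one scene image per (virtual) vision sensor
        scene_images = simulator.render_views(relative_poses)
        # Step 204: output the images so the control system can react to them
        image_output.send(scene_images)
        # Step 201 (next iteration): acquire the control signal produced by the system
        control_signal = control_system.read_control_signal()
```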
  • the movable platform may be a ship
  • the scene model may be a model corresponding to an actual working scene of the ship, for example, a model corresponding to a river.
  • the ship may be installed with a control system and a plurality of visual sensors, and the method described in this embodiment can implement the test of the control system of the ship, thereby assisting in the realization of the automatic driving test verification of the ship.
  • the movable platform may be an intelligent robot
  • the scene model may be a model corresponding to an actual working scene of the robot, for example, a model corresponding to a shopping mall or a warehouse.
  • the intelligent robot can be equipped with a control system and a plurality of visual sensors, and the method described in this embodiment can realize the test of the control system of the intelligent robot, thereby assisting in realizing the autonomous movement test verification of the intelligent robot.
  • the simulation test method provided in this embodiment can be applied to a simulation test system.
  • the simulation test system is used to test a control system of a movable platform based on a virtual scene model.
  • the scene model includes a plurality of scene elements, and the control system is configured to output a control signal for controlling the movable platform based on scene images observed by at least two visual sensors. The simulation test system can specifically acquire the control signal of the movable platform output by the control system, simulate the movement of the movable platform in the scene model based on the control signal, obtain the relative pose between the movable platform and the scene elements, generate a plurality of scene images according to the relative pose, where the plurality of scene images include scene images observed by at least two of the visual sensors when the movable platform moves in the scene model, and output the plurality of scene images so that the control system generates corresponding control signals according to them. In this way, the actual image input of the control system can be simulated by generating scene images observable by at least two vision sensors, so as to realize the simulation test of the control system based on stereo vision, test and verify the basic functions of the control system economically and efficiently, solve practical problems such as unreliability, incomplete scene test coverage, and high investment caused by relying on a large number of road tests, and effectively improve test efficiency and reduce test cost.
  • generating a plurality of scene images according to the relative poses may include:
  • based on the relative pose, an imaging change of the scene elements observable by the movable platform is determined, and a plurality of scene images are generated based on the imaging change.
  • the relative pose may include position and/or angle information of the movable platform relative to one or more scene elements in the scene model. Based on the relative pose, scene elements that can be observed by the movable platform can be determined. The imaging information of the scene element on the vision sensor mounted on the movable platform may be determined based on the relative pose.
  • when the relative pose changes, the imaging of the scene element will also change. Based on this imaging change, the imaging information of the scene element observed by the vision sensor after the relative pose changes can be determined, so as to generate a corresponding scene image.
  • in this way, the imaging changes of the scene elements that can be observed by the movable platform are determined from the relative pose, and multiple scene images are generated based on the imaging changes, so that the scene images that can be observed by the movable platform can be accurately constructed, improving the efficiency and accuracy of the simulation test.
  • specifically, the multiple scene images may be generated according to the relative pose between the movable platform and the scene elements and stereo vision parameters.
  • the stereo vision parameters may include, but are not limited to, the height of the vision sensor, the shooting angle, the baseline parameters, and the like.
  • according to the relative pose between the movable platform and at least one scene element and the stereo vision parameters, information such as the position and angle of each vision sensor installed on the movable platform relative to the surrounding scene elements can be determined, and then the scene image observed by each vision sensor can be determined, so that the scene image observable by each vision sensor can be generated more accurately, further improving the accuracy of the simulation test.
  • the stereo vision parameter is determined by the installation pose of the at least two vision sensors and/or the relative pose between the at least two vision sensors.
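  • A minimal sketch of this idea, assuming a simple rectified pinhole-camera model (the focal length, principal point, baseline, and scene-element coordinates below are illustrative, not taken from the patent): a scene-element point given in the movable platform's frame is projected into a left and a right virtual camera separated by a baseline, which yields the pixel deviation (parallax) between the two generated scene images.

```python
import numpy as np

def project_point_stereo(point_platform, fx, fy, cx, cy, baseline_m):
    """Project a 3D point (platform/camera-rig frame: Z forward, X right, Y down)
    into a left and a right pinhole camera separated horizontally by `baseline_m`."""
    X, Y, Z = point_platform
    # Left camera sits at -baseline/2 along X, right camera at +baseline/2.
    u_left  = fx * (X + baseline_m / 2.0) / Z + cx
    u_right = fx * (X - baseline_m / 2.0) / Z + cx
    v = fy * Y / Z + cy                      # same image row in a rectified setup
    return (u_left, v), (u_right, v)

# A tree 10 m ahead and 1 m to the right, cameras with f = 700 px and 0.12 m baseline:
left_px, right_px = project_point_stereo((1.0, 0.0, 10.0), 700, 700, 640, 360, 0.12)
disparity = left_px[0] - right_px[0]         # = fx * baseline / Z = 700*0.12/10 = 8.4 px
print(left_px, right_px, disparity)
```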
  • outputting the plurality of scene images includes:
  • sending the plurality of scene images to the control system through an image output interface includes: converting the plurality of scene images into a preset format, and sending the scene images in the preset format to the control system through the image output interface.
  • outputting the multiple scene images, so that the control system generates corresponding control signals according to the multiple scene images includes:
  • the number of the visual sensors is the same as the number of displayed scene images; the at least two visual sensors are in one-to-one correspondence with the plurality of scene images, and the visual sensors are used to photograph the corresponding scene images.
  • the number of the display devices is the same as the number of the visual sensors, at least two of the display devices are in one-to-one correspondence with the at least two visual sensors, and each visual sensor is used to photograph the picture displayed by the corresponding display device;
  • sending the plurality of scene images to a display device, so as to display the plurality of scene images through the display device includes:
  • Each scene image is sent to its corresponding display device for display, so as to display the plurality of scene images through at least two of the display devices.
  • before each scene image is sent to its corresponding display device for display, the method further includes:
  • Image conversion is performed on at least part of the multiple scene images according to the calibration parameters corresponding to the visual sensor.
  • multiple scene images are generated according to the relative pose, including:
  • a plurality of scene images are generated according to the relative poses between the movable platform and the scene elements, stereo vision parameters, and calibration parameters corresponding to the vision sensor.
  • the method further includes:
  • the vision sensor is calibrated by a camera calibration method to determine the calibration parameters of the vision sensor.
  • the display device is a 3D display device
  • sending the plurality of scene images to a display device to display the plurality of scene images through the display device includes: sending the plurality of scene images to the 3D display device, so that the 3D display device displays the plurality of scene images by means of 3D projection.
  • the method further includes:
  • determining corresponding sensing information according to the control signal and the scene model, and outputting the sensing information, so that the control system determines a corresponding control signal according to the plurality of scene images and the sensing information.
  • the sensing information includes point cloud data corresponding to the scene element.
  • the method further includes:
  • the control system is evaluated according to the operating state of the movable platform.
  • the movable platform is a vehicle
  • the control system is a driving control system applied to the vehicle.
  • FIG. 3A is a schematic flowchart of another simulation testing method provided by an embodiment of the present application.
  • the method in this embodiment may be executed by an emulator, or may be implemented jointly by the emulator and other devices. As shown in Figure 3A, the method may include:
  • Step 301 Acquire a control signal output by a control system to be tested, where the control signal is a control signal output by the control system according to historically input scene images and a preset control model for controlling the movable platform.
  • Step 302 Output at least two scene images according to the control signal and the virtual scene model.
  • the scene model includes multiple scene elements.
  • the at least two scene images include a first scene image and a second scene image, the first scene image includes a first pixel area, the second scene image includes a second pixel area, and the first pixel area and the second pixel area describe the same scene element; the deviation between the position of the first pixel area in the first scene image and the position of the second pixel area in the second scene image is determined according to the relative pose between the visual sensors of the movable platform.
  • the outputted scene images are used to represent the scene images of the scene elements observed by the vision sensors of the movable platform after the movable platform moves in response to the control signal and the relative pose between the movable platform and the scene elements changes.
  • the outputted multiple scene images may be multiple scene images with parallax.
  • FIG. 3B is a schematic diagram of positions of scene elements in a scene image according to an embodiment of the present application. As shown in FIG. 3B , the left and right images are two output scene images, and the two scene images have parallax.
  • the position of the tree can be different in the two images because, just as the two eyes of a person are a certain distance apart and look at an object from different angles, objects appear in different positions in the views of the left and right eyes, resulting in parallax.
  • the control system based on stereo vision uses this principle to construct the depth information of the surrounding environment through multiple images with parallax, so as to control the vehicle more accurately.
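  • For reference, the standard stereo relationship that such depth construction relies on (implied by the parallax description above, though not written as a formula in the patent text): for rectified cameras with focal length f in pixels and baseline B, a pixel deviation (disparity) d of the same scene element corresponds to depth Z as below; the numeric values are purely illustrative.

```latex
Z = \frac{f \cdot B}{d}
\qquad\text{e.g.}\qquad
Z = \frac{700\ \text{px} \times 0.12\ \text{m}}{8.4\ \text{px}} = 10\ \text{m}
```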
  • At least two scene images generated according to the scene model may be output to the control system, and the same scene element in the scene model may be located in different pixel areas in different scene images.
  • the position of the tree in the first scene image may be different from its position in the second scene image.
  • the tree may be located in the first pixel area in the first scene image
  • in the second scene image may be located in the second pixel area.
  • the position of the first pixel area in the first scene image and the position of the second pixel area in the second scene image are not completely coincident, but have a certain deviation.
  • the positional deviation can be determined according to the relative pose between the visual sensors of the vehicle.
  • the simulation testing method provided in this embodiment can acquire the control signal output by the control system to be tested, where the control signal is a control signal for controlling the movable platform output by the control system according to historically input scene images and a preset control model, and output at least two scene images according to the control signal and the virtual scene model. The scene model includes a plurality of scene elements, the scene images are the scene images of the scene elements observed after the movable platform moves in response to the control signal, and the positional deviations of the scene elements in different scene images are determined according to the relative poses between the visual sensors of the movable platform. In this way, multiple scene images with parallax can simulate the actual image input of the control system, realizing the simulation test of the control system based on stereo vision, testing and verifying the basic functions of the control system economically and efficiently, solving practical problems such as unreliability, incomplete test coverage, and high investment caused by relying on a large number of road tests, and effectively improving test efficiency and reducing test cost.
  • FIG. 4 is a schematic flowchart of another simulation testing method provided by an embodiment of the present application.
  • the method may include:
  • Step 401 Obtain a control signal output by a control system to be tested, where the control signal is a control signal output by the control system according to historically input scene images and a preset control model for controlling the movable platform.
  • step 401 for the specific implementation principle and process of step 401 in this embodiment, reference may be made to the foregoing embodiments, and details are not repeated here.
  • Step 402 Determine at least two scene images according to the control signal and the virtual scene model.
  • At least two scene images can be output according to the control signal and the virtual scene model through steps 402 to 403.
  • at least two scene images may be determined according to the control signal and the virtual scene model, and then the at least two scene images are output.
  • outputting at least two scene images according to the control signal and the virtual scene model may include: determining at least two scene images according to the control signal, the virtual scene model, and stereo vision parameters; and outputting the at least two scene images.
  • the stereo vision parameter may be any parameter used to impart parallax to the scene image.
  • the stereo vision parameter is determined by the installation pose of the vision sensor of the movable platform and/or the relative pose between the vision sensors of the movable platform.
  • the installation posture may include an installation position and/or an installation angle, etc.
  • the relative posture may include a relative position and/or angle of the visual sensors to each other, and the like.
  • the stereo vision parameters may include baseline (Baseline) parameters between vision sensors, and the like.
  • the baseline parameter is used to represent the center distance between the vision sensors, and the parallax size between the output scene images is set by the baseline parameter, so that the scene images can be generated quickly and accurately.
  • determining at least two scene images according to the control signal, the virtual scene model, and the stereo vision parameters may include: determining the relative pose between the movable platform and the scene elements according to the control signal and the virtual scene model; and determining at least two scene images according to the relative pose between the movable platform and the scene elements and the stereo vision parameters.
  • the movable platform is a vehicle.
  • for example, after the driving control system outputs a driving control signal, the vehicle is regarded as moving in the virtual scene model, which is equivalent to a change in the position of the vehicle relative to scene elements such as trees. The at least two scene images to be output can then be determined according to the relative pose between the vehicle and the scene elements and the stereo vision parameters.
  • the specific method for determining the scene image according to the relative pose and the stereo vision parameter can be implemented through a simulation experiment, which is not repeated in this embodiment of the present application.
  • in this way, the relative pose between the movable platform and the scene elements is determined from the control signal and the virtual scene model, and at least two scene images are determined according to the relative pose and the stereo vision parameters, which can accurately simulate the images observed by the movable platform in the actual scene, further improving the test accuracy.
  • Step 403 Send the at least two scene images to a display device, so as to display the at least two scene images through the display device, so that the control system determines the corresponding control signal based on the images captured by the at least two visual sensors and the preset control model.
  • At least two scene images can be output through step 403 .
  • the visual sensor may be a device capable of capturing images, such as a camera, a video camera, or the like.
  • the number of the visual sensors is the same as the number of output scene images; the at least two visual sensors are in one-to-one correspondence with the at least two scene images, and the visual sensors are used to photograph the corresponding scene images; the The relative pose between the at least two vision sensors is determined by the relative pose between the vision sensors of the movable platform.
  • the at least two visual sensors are in a one-to-one correspondence with the at least two scene images, which may mean that there are n visual sensors and n scene images, wherein the ith visual sensor corresponds to the ith scene image, that is, the ith visual sensor is used to capture the ith scene image displayed by the display device, where i takes a value from 1 to n, and n is a positive integer ≥ 2.
  • the at least two visual sensors used in the simulation test can be used to simulate the visual sensors used by the vehicle in practical applications.
  • the relative pose between the at least two vision sensors may be determined from the relative pose between the visual sensors in the practical application.
  • the at least two vision sensors used in the test may be configured according to the vision sensors of the actual vehicle; for example, the number, resolution, position, and shooting angle of the actual vehicle's vision sensors may be used to determine the number, resolution, position, and shooting angle of the at least two vision sensors during the test, so that the vision sensors during testing are kept consistent with the vision sensors during actual use, improving the accuracy of the test.
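  • A hedged sketch of how such a test-rig configuration could be recorded so that it mirrors the actual vehicle; the field names and numeric values are illustrative assumptions, not parameters defined in the patent.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class VisionSensorConfig:
    """Parameters copied from the actual vehicle's vision sensors to the test setup."""
    resolution: Tuple[int, int]                      # (width, height) in pixels
    position_m: Tuple[float, float, float]           # mounting position (x, y, z) on the vehicle
    shooting_angle_deg: Tuple[float, float, float]   # yaw, pitch, roll of the sensor

# Hypothetical left/right pair as used on the actual vehicle ...
actual_vehicle_sensors = [
    VisionSensorConfig((1280, 720), (-0.06, 0.0, 1.30), (0.0, 0.0, 0.0)),
    VisionSensorConfig((1280, 720), ( 0.06, 0.0, 1.30), (0.0, 0.0, 0.0)),
]
# ... reused unchanged for the test rig, so that number, resolution, position, and
# shooting angle stay consistent between the test and actual use.
test_rig_sensors = list(actual_vehicle_sensors)
```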
  • the simulation test method provided in this embodiment can realize the test of the control system and the visual sensor.
  • the simulator can play the scene image to the visual sensor for "viewing" through the display device, and the control system processes the image collected by the visual sensor and outputs the corresponding control signal.
  • this test method can realize a hardware-in-the-loop experiment and completely test the hardware and software parts of the whole system, filling the experimental gap of stereo vision hardware-in-the-loop simulation testing.
  • the number of the display devices may be one or more; for example, multiple display devices may be set to display the corresponding scene images respectively, or a single display device may be set to display each scene image in a different display area of the display device.
  • the number of the display devices is the same as the number of the visual sensors, the at least two display devices are in one-to-one correspondence with the at least two visual sensors, and each visual sensor is used to photograph the picture displayed by the corresponding display device.
  • sending the at least two scene images to a display device to display the at least two scene images through the display device may include: sending each scene image to its corresponding display device for display, so that the at least two scene images are displayed by at least two of the display devices.
  • the at least two display devices are in one-to-one correspondence with the at least two visual sensors, which may mean that there are n visual sensors and n display devices, wherein the ith visual sensor corresponds to the ith display device, that is, the ith visual sensor is used to capture the picture of the ith display device, where i takes a value from 1 to n, and n is a positive integer ≥ 2.
  • FIG. 5 is a schematic diagram of a test architecture based on multiple display devices according to an embodiment of the present application.
  • the control system and the visual sensor connected to it constitute the tested system.
  • with a display device on the left and a display device on the right, correspondingly there are also two output scene images, which are respectively recorded as the left image and the right image. The left image can be sent to the left display device and the right image to the right display device, and the two display devices can synchronously play the parallax images on the left and right sides respectively. The left camera and the right camera shoot the pictures displayed by the left display device and the right display device respectively, so that different images can be collected by different cameras and input to the control system for processing, realizing a combined software and hardware test.
  • the monocular system can perceive changes in the environment through changes between successive frames, while the images obtained by the binocular system have parallax, from which the depth information of the target object can be inferred.
  • this embodiment is based on this principle: two display devices are used to play the video streams formed by the scene images respectively, and two cameras are used to capture the video stream played by each display device and transmit it to the control system, realizing the stereo imaging experiment in a simulated environment, so that a stereo simulation video stream can be provided to the tested system under indoor conditions to achieve the same test effect as the real scene.
  • the vision sensor may also be calibrated by a camera calibration method to determine the calibration parameters of the vision sensor, and determine the output scene image according to the calibration parameters.
  • the camera calibration method may include calibration methods such as the checkerboard method.
  • FIG. 6 is a schematic diagram of a calibration pattern provided by an embodiment of the present application.
  • for example, the calibration pattern shown in FIG. 6 can be used.
  • the two display devices can respectively play the calibration pattern, the corresponding vision sensors collect the calibration images, a calibration algorithm is used to calculate the rotation and/or translation parameters of the pattern, which are recorded as the calibration parameters of the vision sensors, and the parameters are fed back to the simulator so that the simulator applies a reverse rotation and/or translation to the output images; in this way, the images collected by the two vision sensors can be aligned, so as to achieve good depth matching for the stereo vision algorithm.
  • for example, each image displayed by the left display device can be rotated 5° clockwise to ensure that the left and right images are aligned.
  • image transformation may be performed on at least part of the at least two scene images according to calibration parameters corresponding to the visual sensor.
  • the simulator may first obtain at least two scene images with parallax, then perform image transformation on at least part of the at least two scene images according to the calibration parameters, for example, rotation and/or translation, and finally obtain aligned images.
  • the image transformation can be implemented through an emulator, or can be implemented by adding other modules after the emulator.
  • the at least two scene images can be directly adjusted according to the changed calibration parameters, and the solution is flexible and easy to implement.
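  • A hedged sketch of how such a calibration-and-compensation step might be scripted with OpenCV (the patent does not prescribe any particular library or algorithm; the checkerboard size, function choices, and variable names are assumptions): a calibration pattern shown on both displays is photographed by the corresponding cameras, an affine rotation/translation mapping the left capture onto the right capture is estimated, and that transform is applied to the simulator's left output so that the two captured images end up aligned.

```python
import cv2

PATTERN = (9, 6)  # inner-corner count of the checkerboard shown on both displays

def find_corners(image_bgr):
    """Detect checkerboard corners in one captured calibration image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        raise RuntimeError("calibration pattern not found")
    return corners.reshape(-1, 2)

def estimate_alignment(left_capture, right_capture):
    """Estimate the 2x3 rotation/translation that maps corner positions in the
    left capture onto the matching positions in the right capture."""
    src = find_corners(left_capture)
    dst = find_corners(right_capture)
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)
    return matrix

def compensate(simulator_left_image, matrix):
    """Pre-warp the simulator's left output by the estimated transform, so the
    image captured through the left display/camera chain aligns with the right one."""
    h, w = simulator_left_image.shape[:2]
    return cv2.warpAffine(simulator_left_image, matrix, (w, h))
```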
  • outputting at least two scene images according to the control signal and the virtual scene model may include: according to the control signal, the virtual scene model, stereo vision parameters, and calibration parameters corresponding to the vision sensor, determining at least two scene images; outputting the at least two scene images.
  • in this way, the simulator can directly obtain at least two scene images that have parallax and are aligned, according to the control signal, the virtual scene model, the stereo vision parameters, and the calibration parameters corresponding to the vision sensors, so that the final output scene images can be quickly determined, improving the efficiency of the simulation test.
  • FIG. 7 is a schematic diagram of a test architecture based on an optical system provided by an embodiment of the present application.
  • the solution shown in FIG. 7 is based on the solution shown in FIG. 5 , and an optical system is added.
  • the optical system may be disposed between the display device and the visual sensor, and the optical system is used to perform optical conversion on the image output by the display device, so that the converted image matches the field of view (FOV) of the visual sensor.
  • the number of the optical systems may be one or more, one optical system may be configured for each display device, or one optical system may be shared by multiple display devices.
  • the structure of the optical system can be designed according to actual needs, as long as the image can be converted to match the field of view of the vision sensor.
  • the direct display method through the display device usually makes the entire test system very large. Therefore, an optical system can be introduced in this embodiment; the optical system can perform optical conversion on the image output by the display device and enlarge the output image, thereby matching the field of view of the vision sensor, which can effectively reduce the volume of the imaging part without using huge display equipment, reducing the test footprint.
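  • To make the FOV-matching relationship concrete, a small hedged calculation under a pinhole-camera model (the sensor width, focal length, and display width are illustrative): the camera's horizontal FOV is 2·atan(w / (2f)), and the optical system can be designed so that the displayed image, after conversion, subtends roughly this angle at the vision sensor.

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of a pinhole camera."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def required_viewing_distance_m(display_width_m, fov_deg):
    """Distance at which a flat display of the given width fills the camera's FOV
    (ignores lens distortion; a real optical system folds/magnifies this path)."""
    return (display_width_m / 2) / math.tan(math.radians(fov_deg) / 2)

fov = horizontal_fov_deg(5.6, 3.0)                  # ~86 degrees for a small camera module
print(fov, required_viewing_distance_m(0.60, fov))  # ~0.32 m for a 60 cm wide display
```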
  • FIG. 8 is a schematic diagram of a test architecture based on a 3D display device provided by an embodiment of the present application.
  • the display device may be a 3D display device.
  • sending the at least two scene images to a display device to display the at least two scene images through the display device may include: sending the at least two scene images to the 3D display device, so that the 3D display device displays the at least two scene images by means of 3D projection.
  • if the intensity of light waves vibrating in all directions is the same, the light is usually called natural light; for example, sunlight is one of the most common kinds of natural light. Light with a fixed polarization direction is called polarized light; the light emitted by the screen of a display device can be polarized light.
  • different polarization directions can be set for different scene images, thereby forming 3D images.
  • the vision sensor can be provided with a polarizer that is consistent with the polarization direction of the polarized light emitted by the display device.
  • the polarizer only allows light with a specific polarization direction to enter the vision sensor, so that images with a specific polarization direction can be accurately collected without being affected by images with other polarization directions.
  • the polarization directions of the two scene images may be orthogonal.
  • Each vision sensor is aligned with the polarization direction of the corresponding image, respectively.
  • for example, the polarization direction of the first scene image is direction 1 and the polarization direction of the second scene image is direction 2; the first vision sensor is used to collect the first scene image and the second vision sensor is used to collect the second scene image, so the polarization direction of the polarizer of the first vision sensor may be direction 1 and that of the second vision sensor may be direction 2.
  • the control system and the visual sensor connected to it constitute the tested system.
  • the 3D display device is capable of displaying 3D images, so that two scene images can be displayed by one 3D display device for the left camera and the right camera to capture respectively.
  • at least two scene images can be played using 3D technology, and the two cameras to be tested can be equipped with orthogonal polarizers, similar to watching a 3D movie, so that each camera can watch a picture at the same time; but because of the different polarization directions, the images collected by the two cameras are different, and the back-end stereo restoration of the control system can still be realized.
  • in this way, multiple scene images can be displayed on the same display device without interfering with each other, thereby reducing the number of devices, reducing the area of the test site, and reducing the test cost while ensuring the test accuracy.
  • FIG. 9 is a schematic diagram of a test architecture based on an image output interface provided by an embodiment of the present application.
  • outputting at least two scene images may include: sending the at least two scene images to the control system through an image output interface.
  • the image output interface may be integrated with the simulator, or may be provided separately from the simulator, or the image output interface may also be integrated with the control system.
  • the simulator can determine at least two scene images according to the control signal and the virtual scene model, and then send the at least two scene images to the control system through the image output interface; the control system can then continue to generate and output the corresponding control signal according to the scene images.
  • the actual imaging input of the vision sensor can be discarded, and at least two scene images can be input to the control system in the form of digital signals directly through the image output interface.
  • This method belongs to the pseudo hardware-in-the-loop implementation.
  • the optical imaging part of the vision sensor has not been tested, but the control system can be tested, thereby realizing the testing of the system software.
  • the control system can be tested independently, further reducing the equipment used in the simulation test and improving the flexibility of the simulation test.
  • the control system can be tested separately in the early stage. After the control system has passed the test, a visual sensor can be added to further test the software and hardware. In this way, two tests can be implemented through a single simulator, which improves the test efficiency.
  • sending the at least two scene images to the control system through an image output interface may include: converting the at least two scene images into at least two scene images in a preset format; and sending the at least two scene images in the preset format to the control system through the image output interface.
  • the preset format may be the format output by the vision sensor in actual use.
  • for example, if the output format of the vision sensor to the control system is USB format, the emulator can convert the scene image in HDMI format determined according to the control signal and the virtual scene model into USB format and send it to the control system through the image output interface, so that the test environment is closer to the actual use scene and meets the input requirements of the control system, allowing the test to be carried out smoothly.
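  • As a hedged illustration of the "preset format" conversion step only (the actual formats depend on the sensor being emulated; the resolution and pixel layout below are assumptions, and the physical USB/HDMI transport is not shown): the simulator's rendered frame is resampled to the sensor's native resolution and pixel format before being pushed through the image output interface.

```python
import cv2
import numpy as np

def to_sensor_format(frame_bgr, out_size=(1280, 720)):
    """Convert a rendered BGR frame to a sensor-like payload: native resolution
    plus planar YUV 4:2:0 (a pixel layout many camera interfaces deliver)."""
    resized = cv2.resize(frame_bgr, out_size, interpolation=cv2.INTER_AREA)
    return cv2.cvtColor(resized, cv2.COLOR_BGR2YUV_I420)

# Example with a synthetic frame standing in for the simulator output.
fake_render = np.zeros((1080, 1920, 3), dtype=np.uint8)
packet = to_sensor_format(fake_render)
print(packet.shape)   # (720 * 3 // 2, 1280) for I420
```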
  • FIG. 10 is a schematic diagram of a test architecture based on a direct connection of an emulator provided by an embodiment of the present application.
  • outputting at least two scene images may include: directly sending the at least two scene images to the control system.
  • the simulator can be directly connected with the control system, and the generated at least two scene images are directly sent to the control system.
  • The interface and protocol can be the same as in actual use, and the control system does not need to care whether the acquired image is a really captured image or directly received image data.
  • the image output by the emulator can be directly sent to the control system without passing through other modules.
  • the structure is simple and easy to implement, which further reduces the cost of testing.
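For illustration only, the closed loop formed by a directly connected simulator and control system might be organized as in the sketch below. The `Simulator` and `ControlSystem` interfaces and their method names are hypothetical placeholders used to show how the control signal and the generated scene images circulate.

```python
from typing import List, Protocol
import numpy as np

class ControlSystem(Protocol):
    def step(self, scene_images: List[np.ndarray]) -> dict: ...

class Simulator(Protocol):
    def render_views(self, control_signal: dict) -> List[np.ndarray]: ...

def run_direct_loop(simulator: Simulator, control_system: ControlSystem, steps: int = 1000) -> None:
    """Pseudo hardware-in-the-loop: the rendered scene images go straight to
    the control system as digital data, bypassing the optical imaging path."""
    control_signal = {"throttle": 0.0, "steering": 0.0}  # illustrative initial signal
    for _ in range(steps):
        scene_images = simulator.render_views(control_signal)  # advance scene, render stereo views
        control_signal = control_system.step(scene_images)     # produce the next control signal
```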
  • The above embodiments provide multiple implementations of stereo simulation testing. It should be noted that more implementations can be extended on this basis, as long as it is ensured that the images obtained by the control system are images with parallax.
  • If the scene image determined by the simulator is a single scene image, an image output interface can be set after the simulator, and the image output interface can output the at least two scene images according to the single scene image, thereby effectively reducing the burden on the simulator.
  • The single scene image may be a scene image observable at any position of the movable platform, for example, a scene image observable by one of the vision sensors, or a scene image observable at the midpoint between two vision sensors.
  • the image output interface may store stereo vision parameters to convert a single scene image into multiple scene images with parallax.
  • the image output interface may also store the information of each scene element in the virtual scene model, so as to restore the scene image observable by each visual sensor more accurately.
  • Alternatively, a module can be added after the vision sensors to add parallax to the scene images.
  • For example, a conversion module is added after the vision sensors, and the scene images are converted by the conversion module to obtain at least two scene images with parallax, which are output to the control system.
  • In short, the at least two scene images input to the control system have parallax, but the specific stage at which the parallax is introduced is not limited in the embodiments of the present application.
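As one possible (non-limiting) realization of such a conversion module, a single rendered scene image plus its depth map can be warped into a second view using the pinhole stereo relation disparity = focal length × baseline / depth. The parameters and the very simple occlusion handling below are assumptions for illustration.

```python
import numpy as np

def synthesize_right_view(left: np.ndarray, depth: np.ndarray,
                          focal_px: float, baseline_m: float) -> np.ndarray:
    """Warp the left scene image into a right view using per-pixel disparity.

    disparity = focal_px * baseline_m / depth (pinhole stereo model).
    The forward warp below ignores proper occlusion ordering and leaves holes
    unfilled; it is only meant to show where the parallax comes from.
    """
    h, w = depth.shape
    disparity = focal_px * baseline_m / np.maximum(depth, 1e-6)
    right = np.zeros_like(left)
    xs = np.arange(w)
    for row in range(h):
        # A point at column x in the left view appears near x - d in the right view.
        target = np.round(xs - disparity[row]).astype(int)
        valid = (target >= 0) & (target < w)
        right[row, target[valid]] = left[row, valid]
    return right
```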
  • The method may further include: determining corresponding sensing information according to the control signal and the virtual scene model; and outputting the sensing information to the control system, so that the control system determines a corresponding control signal according to the at least two scene images and the sensing information.
  • various types of sensors can be provided on the movable platform, and are not limited to vision sensors.
  • the operation of the control system in different states can be effectively tested to meet the needs of different application scenarios.
  • the sensing information may be any type of sensing information, including but not limited to: wind speed, temperature, humidity, weather, and the like.
  • the sensing information may include point cloud data.
  • point cloud data can be detected by lidar, and lidar can be used to assist in perceiving the surrounding environment.
  • Determining the corresponding sensing information according to the control signal and the virtual scene model may include: determining the relative pose between the movable platform and the scene elements according to the control signal and the virtual scene model; and determining point cloud data according to the relative pose between the movable platform and the scene elements.
  • the simulator can also output point cloud data corresponding to the scene elements in the virtual scene model.
  • The point cloud data can describe the depth information of the scene elements, so that the control system can determine the corresponding control signal according to the at least two scene images and the point cloud data. This not only tests the response of the control system to the scene images but also tests its response to the point cloud data, which effectively adds a dimension to the test and improves the test effect.
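A minimal sketch of deriving point cloud data from the relative pose between the movable platform and the scene elements is given below. It assumes each scene element carries a set of surface points in its own coordinate frame and that the relative pose is expressed as a rotation matrix and a translation vector; these representations are illustrative, not prescribed.

```python
import numpy as np

def element_point_cloud(points_in_element: np.ndarray,
                        rotation: np.ndarray,
                        translation: np.ndarray) -> np.ndarray:
    """Transform a scene element's surface points (N x 3, element frame) into
    the movable platform / lidar frame using the element's relative pose."""
    return points_in_element @ rotation.T + translation

def scene_point_cloud(elements) -> np.ndarray:
    """Concatenate the transformed points of all scene elements.

    `elements` is an iterable of (points, rotation, translation) tuples, where
    rotation and translation give each element's pose relative to the platform.
    """
    clouds = [element_point_cloud(p, r, t) for (p, r, t) in elements]
    return np.concatenate(clouds, axis=0) if clouds else np.empty((0, 3))
```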
  • The method may further include: determining the operating state of the movable platform according to the control signal; and evaluating the control system according to the operating state of the movable platform.
  • The simulator can determine, from the control signal and the virtual scene model, at least two scene images or sensing information to be presented. The at least two scene images or the sensing information are simulations of real environment data; the driving control system can recognize them and, according to the recognized information and a preset driving control strategy, make corresponding driving decisions, which are the control signals. For example, if a zebra crossing is detected, a deceleration signal is output.
  • After the driving control system outputs a control signal, the simulator determines, according to that control signal, the running state of the vehicle, such as the position of the vehicle in the lane, the distance to surrounding obstacles, and so on.
  • The control system can then be evaluated according to the running state of the vehicle, for example whether it deviates from the driving lane, runs a red light, or gets too close to an obstacle, and the evaluation of the driving control system is output according to the judgment result.
  • The evaluation can be a rating or a pass/fail (qualified or unqualified) result.
  • If, during the simulation test, the driving control system causes the vehicle to run a red light, collide with an obstacle, or perform other dangerous behaviors or behaviors that violate the traffic rules, the driving control system can be considered unqualified and its algorithm needs to be re-optimized.
  • Otherwise, the control system can be considered qualified. After passing the simulation test, an actual road test or other tests can be arranged, and after all the tests are completed, the driving control system can be put into use.
  • In this way, the evaluation of the control system can be realized effectively, the requirements of the simulation test can be met, most test problems can be found and converged in advance, the cost of the later road test can be reduced, and test efficiency can be improved.
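The evaluation step can be pictured as a simple rule check over the simulated running states, for example as in the following sketch. The state fields, thresholds and pass/fail criteria are hypothetical; an actual evaluation could use different rules or a graded score.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class RunningState:
    lane_offset_m: float        # lateral deviation from the lane centre
    min_obstacle_dist_m: float  # distance to the nearest obstacle
    ran_red_light: bool
    collided: bool

def evaluate_control_system(states: Iterable[RunningState],
                            max_lane_offset: float = 0.5,
                            min_safe_dist: float = 1.0) -> str:
    """Return 'qualified' only if no rule is violated over the whole test run."""
    for s in states:
        if s.collided or s.ran_red_light:
            return "unqualified"
        if abs(s.lane_offset_m) > max_lane_offset:
            return "unqualified"
        if s.min_obstacle_dist_m < min_safe_dist:
            return "unqualified"
    return "qualified"
```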
  • FIG. 11A is a schematic structural diagram of a simulation testing apparatus provided by an embodiment of the present application.
  • The apparatus is applied to a simulation test system; the simulation test system is used to test a control system of a movable platform based on a virtual scene model, and the scene model includes a plurality of scene elements; the control system is configured to output, based on the scene images observed by at least two vision sensors, a control signal for controlling the movable platform.
  • the apparatus may include:
  • an acquisition module 1101, configured to acquire the control signal of the movable platform output by the control system
  • a simulation module 1102 configured to simulate the movement of the movable platform in the scene model based on the control signal, to obtain the relative pose between the movable platform and the scene element;
  • a generating module 1103, configured to generate a plurality of scene images according to the relative pose, where the plurality of scene images include scene images observed by at least two of the vision sensors when the movable platform moves in the scene model;
  • the output module 1104 is configured to output the plurality of scene images, so that the control system generates corresponding control signals according to the plurality of scene images.
  • When the generating module 1103 generates the multiple scene images according to the relative pose, it is specifically configured to:
  • an imaging change of the scene element observable by the movable platform is determined, and a plurality of scene images are generated based on the imaging change.
  • When the generating module 1103 generates the multiple scene images according to the relative pose, it is specifically configured to:
  • the plurality of scene images are generated according to the relative pose and stereo vision parameters between the movable platform and the scene elements.
  • the stereo vision parameter is determined by the installation pose of the at least two vision sensors and/or the relative pose between the at least two vision sensors.
  • When the output module 1104 outputs the multiple scene images, it is specifically configured to:
  • When the output module 1104 sends the plurality of scene images to the control system through an image output interface, it is specifically configured to:
  • the output module 1104 is specifically used for:
  • the number of the visual sensors is the same as the number of displayed scene images; the at least two visual sensors are in one-to-one correspondence with the plurality of scene images, and the visual sensors are used to photograph the corresponding scene images.
  • the number of the display devices is the same as the number of the vision sensors, at least two of the display devices are in one-to-one correspondence with the at least two vision sensors, and each vision sensor is used to capture the picture displayed by its corresponding display device;
  • When the output module 1104 sends the plurality of scene images to a display device so as to display the plurality of scene images through the display device, it is specifically configured to:
  • Each scene image is sent to its corresponding display device for display, so as to display the plurality of scene images through at least two of the display devices.
  • Before sending each scene image to its corresponding display device for display, the output module 1104 is further configured to:
  • Image conversion is performed on at least part of the multiple scene images according to the calibration parameters corresponding to the visual sensor.
  • When the generating module 1103 generates the multiple scene images according to the relative pose, it is specifically configured to:
  • a plurality of scene images are generated according to the relative poses between the movable platform and the scene elements, stereo vision parameters, and calibration parameters corresponding to the vision sensor.
  • the generating module 1103 is further configured to:
  • the vision sensor is calibrated by a camera calibration method to determine the calibration parameters of the vision sensor.
  • the display device is a 3D display device
  • When the output module 1104 sends the plurality of scene images to a display device so as to display the plurality of scene images through the display device, it is specifically configured to:
  • the output module 1104 is further configured to:
  • the sensing information is output, so that the control system determines a corresponding control signal according to the plurality of scene images and the sensing information.
  • the sensing information includes point cloud data corresponding to the scene element.
  • the output module 1104 is further configured to:
  • the control system is evaluated according to the operating state of the movable platform.
  • the movable platform is a vehicle
  • the control system is a driving control system applied to the vehicle.
  • the simulation testing apparatus provided in this embodiment can be used to execute the simulation testing method shown in FIG. 2 , and the specific implementation principles and effects thereof can refer to the foregoing embodiments, which will not be repeated here.
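For orientation only, the four modules of the apparatus in FIG. 11A can be viewed as stages of a small pipeline, as in the sketch below. The class and method names are hypothetical and merely mirror the acquisition / simulation / generation / output roles described above.

```python
from typing import List
import numpy as np

class SimulationTestingApparatus:
    """Wires together the four roles played by modules 1101-1104."""

    def __init__(self, acquisition, simulation, generation, output):
        self.acquisition = acquisition  # obtains the control signal (cf. module 1101)
        self.simulation = simulation    # simulates motion, yields the relative pose (cf. module 1102)
        self.generation = generation    # renders the stereo scene images (cf. module 1103)
        self.output = output            # delivers the images to the control system (cf. module 1104)

    def run_step(self) -> None:
        control_signal = self.acquisition.get_control_signal()
        relative_pose = self.simulation.simulate(control_signal)
        scene_images: List[np.ndarray] = self.generation.generate(relative_pose)
        self.output.output(scene_images)
```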
  • FIG. 11B is a schematic structural diagram of another simulation testing apparatus provided by an embodiment of the present application. As shown in Figure 11B, the apparatus may include:
  • the acquisition module 1111 is configured to acquire the control signal output by the control system to be tested, where the control signal is a control signal for controlling the movable platform, output by the control system according to the historically input scene images and a preset control model;
  • an output module 1112 configured to output at least two scene images according to the control signal and the virtual scene model
  • the virtual scene model includes a plurality of scene elements
  • the at least two scene images include a first scene image and a second scene image; the first scene image includes a first pixel area, the second scene image includes a second pixel area, and the first pixel area and the second pixel area describe the same scene element; the positional deviation between the position of the first pixel area in the first scene image and the position of the second pixel area in the second scene image is determined according to the relative pose between the vision sensors of the movable platform;
  • the outputted scene images are used to represent the scene images of the scene elements observed by the vision sensors of the movable platform after the movable platform moves in response to the control signal and the relative pose between the movable platform and the scene elements changes.
  • the output module 1112 is specifically used for:
  • the at least two scene images are output.
  • the output module 1112 is specifically used for:
  • The relative pose of the movable platform and the scene elements is determined according to the control signal and the virtual scene model;
  • At least two scene images are determined according to the relative poses of the movable platform and the scene elements and stereo vision parameters.
  • the stereo vision parameter is determined by the installation pose of the vision sensor of the movable platform and/or the relative pose between the vision sensors of the movable platform.
  • When the output module 1112 outputs at least two scene images, it is specifically configured to:
  • the at least two scene images are sent to the control system through an image output interface.
  • When the output module 1112 sends the at least two scene images to the control system through an image output interface, it is specifically configured to:
  • the at least two scene images in the preset format are sent to the control system through an image output interface.
  • When the output module 1112 outputs at least two scene images, it is specifically configured to:
  • the number of the visual sensors is the same as the number of output scene images; the at least two visual sensors are in one-to-one correspondence with the at least two scene images, and the visual sensors are used to photograph the corresponding scene images.
  • the number of the display devices is the same as the number of the vision sensors, at least two of the display devices are in one-to-one correspondence with the at least two vision sensors, and each vision sensor is used to capture the picture displayed by its corresponding display device;
  • When the output module 1112 sends the at least two scene images to a display device so as to display the at least two scene images through the display device, it is specifically configured to:
  • Each scene image is sent to its corresponding display device for display, so as to display the at least two scene images through at least two of the display devices.
  • the output module 1112 is further configured to:
  • image conversion is performed on at least part of the at least two scene images according to the calibration parameters corresponding to the visual sensor.
  • When outputting at least two scene images according to the control signal and the virtual scene model, the output module 1112 is specifically configured to:
  • the at least two scene images are output.
  • the output module 1112 is further configured to:
  • the vision sensor is calibrated by a camera calibration method to determine the calibration parameters of the vision sensor.
  • the display device is a 3D display device
  • When the output module 1112 sends the at least two scene images to a display device so as to display the at least two scene images through the display device, it is specifically configured to:
  • the at least two scene images are sent to the 3D display device, so that the 3D display device displays the at least two scene images by means of 3D projection.
  • the output module 1112 is further configured to:
  • the sensing information is output, so that the control system determines a corresponding control signal according to the at least two scene images and the sensing information.
  • the sensing information includes point cloud data corresponding to the scene element
  • the output module 1112 is specifically used for:
  • Point cloud data is determined according to the relative pose between the movable platform and the scene element.
  • the output module 1112 is further configured to:
  • the control system is evaluated according to the operating state of the movable platform.
  • the control system is a driving control system applied to a vehicle
  • the movable platform is a vehicle
  • the simulation testing apparatus provided in this embodiment can be used to execute the simulation testing methods in the embodiments shown in FIG. 3A to FIG. 10 , and the specific implementation principles and effects can be found in the foregoing embodiments, which are not repeated here.
  • FIG. 12A is a schematic structural diagram of a simulation testing system provided by an embodiment of the present application.
  • the simulation test system is used for testing the control system of the movable platform based on a virtual scene model, and the scene model includes a plurality of scene elements; the control system is configured to output, based on the scene images observed by at least two vision sensors, control signals for controlling the movable platform.
  • the system may include: an emulator 1201 and an image output device 1202;
  • the simulator 1201 is configured to obtain the control signal of the movable platform output by the control system, simulate, based on the control signal, the movement of the movable platform in the scene model, obtain the relative pose between the movable platform and the scene elements, and generate an observable scene image based on the relative pose;
  • the image output device 1202 is configured to acquire the scene image generated by the simulator 1201 and output a plurality of scene images according to the acquired scene image, where the plurality of scene images include scene images observed by at least two of the vision sensors when the movable platform moves in the scene model, so that the control system generates corresponding control signals according to the plurality of scene images.
  • When the simulator 1201 generates an observable scene image based on the relative pose, it is specifically configured to:
  • the plurality of scene images are generated based on relative poses between the movable platform and the scene elements.
  • When the simulator 1201 generates multiple scene images according to the relative pose, it is specifically configured to:
  • an imaging change of the scene element observable by the movable platform is determined, and a plurality of scene images are generated based on the imaging change.
  • When the simulator 1201 generates multiple scene images according to the relative pose, it is specifically configured to:
  • the plurality of scene images are generated according to the relative pose and stereo vision parameters between the movable platform and the scene elements.
  • the stereo vision parameter is determined by the installation pose of the at least two vision sensors and/or the relative pose between the at least two vision sensors.
  • the image output device 1202 includes an image output interface, and the emulator 1201 is further used for:
  • When the simulator 1201 sends the multiple scene images to the image output interface, it is specifically configured to:
  • the image output device 1202 includes a display device for displaying the plurality of scene images
  • the simulator 1201 is further configured to: send the plurality of scene images to a display device, so as to display the plurality of scene images through the display device, so that the control system outputs the corresponding control signal according to the images that the at least two vision sensors capture from the display device;
  • the number of the visual sensors is the same as the number of displayed scene images; the at least two visual sensors are in one-to-one correspondence with the plurality of scene images, and the visual sensors are used to photograph the corresponding scene images.
  • the number of the display devices is the same as the number of the vision sensors, at least two of the display devices are in one-to-one correspondence with the at least two vision sensors, and each vision sensor is used to capture the picture displayed by its corresponding display device;
  • When the simulator 1201 sends the plurality of scene images to a display device so as to display the plurality of scene images through the display device, it is specifically configured to:
  • Each scene image is sent to its corresponding display device for display, so as to display the plurality of scene images through at least two of the display devices.
  • the emulator 1201 is also used for:
  • image conversion is performed on at least part of the plurality of scene images according to the calibration parameter corresponding to the visual sensor.
  • When the simulator 1201 generates multiple scene images according to the relative pose, it is specifically configured to:
  • a plurality of scene images are generated according to the relative poses between the movable platform and the scene elements, stereo vision parameters, and calibration parameters corresponding to the vision sensor.
  • the emulator 1201 is also used for:
  • the vision sensor is calibrated by a camera calibration method to determine the calibration parameters of the vision sensor.
  • the display device is a 3D display device, and the 3D display device displays the plurality of scene images in a 3D projection manner.
  • the system further includes: an optical system
  • the optical system is arranged between the display device and the vision sensor, and the optical system is used to perform optical conversion on the scene image output by the display device, so that the converted scene image matches the field of view (field angle) of the vision sensor.
  • the scene image generated by the simulator 1201 is a single scene image
  • the image output device 1202 is specifically configured to output the multiple scene images according to the single scene image.
  • the emulator 1201 is also used for:
  • the sensing information is output, so that the control system determines a corresponding control signal according to the plurality of scene images and the sensing information.
  • the sensing information includes point cloud data corresponding to the scene element.
  • the emulator 1201 is also used for:
  • the control system is evaluated according to the operating state of the movable platform.
  • the movable platform is a vehicle
  • the control system is a driving control system applied to the vehicle.
  • the simulation testing system provided in this embodiment can be used to execute the simulation testing method shown in FIG. 2 , and the specific implementation principles and effects thereof can refer to the foregoing embodiments, which will not be repeated here.
  • FIG. 12B is a schematic structural diagram of another simulation testing system provided by an embodiment of the present application. As shown in FIG. 12B, the system may include: a simulator 1211 and an image output device 1212;
  • the simulator 1211 is used to obtain the control signal output by the control system to be tested, and determine the corresponding scene image according to the control signal and the virtual scene model; wherein, the virtual scene model includes a plurality of scene elements;
  • the image output device 1212 is configured to acquire the scene image, and output at least two scene images according to the scene image;
  • the virtual scene model includes a plurality of scene elements
  • the at least two scene images include a first scene image and a second scene image; the first scene image includes a first pixel area, the second scene image includes a second pixel area, and the first pixel area and the second pixel area describe the same scene element; the positional deviation between the position of the first pixel area in the first scene image and the position of the second pixel area in the second scene image is determined according to the relative pose between the vision sensors of the movable platform;
  • the control signal is a control signal for controlling the movable platform output by the control system according to the historically input at least two scene images and a preset control model;
  • the outputted scene images are used to represent the scene images of the scene elements observed by the vision sensors of the movable platform after the movable platform moves in response to the control signal and the relative pose between the movable platform and the scene elements changes.
  • the scene image determined by the simulator 1211 includes the at least two scene images.
  • the emulator 1211 is specifically used for:
  • the control signal output by the control system to be tested is acquired, and at least two scene images are determined according to the control signal, the virtual scene model and the stereo vision parameters.
  • When the simulator 1211 determines at least two scene images according to the control signal, the virtual scene model and the stereo vision parameters, it is specifically configured to:
  • At least two scene images are determined according to the relative poses of the movable platform and the scene elements and stereo vision parameters.
  • the stereo vision parameter is determined by the installation pose of the vision sensor of the movable platform and/or the relative pose between the vision sensors of the movable platform.
  • the image output device 1212 includes an image output interface, and the emulator 1211 is further used for:
  • When the simulator 1211 sends the determined at least two scene images to the image output interface, it is specifically configured to:
  • the image output device 1212 includes a display device for displaying the at least two scene images
  • the simulator 1211 is further configured to: send the at least two scene images to a display device, so as to display the at least two scene images through the display device, so that the control system determines the corresponding control signal according to the scene images captured by the at least two vision sensors and the preset control model;
  • the number of the visual sensors is the same as the number of output scene images; the at least two visual sensors are in one-to-one correspondence with the at least two scene images, and the visual sensors are used to photograph the corresponding scene images.
  • the number of the display devices is the same as the number of the vision sensors, at least two of the display devices are in one-to-one correspondence with the at least two vision sensors, and each vision sensor is used to capture the picture displayed by its corresponding display device;
  • When the simulator 1211 sends the at least two scene images to a display device so as to display the at least two scene images through the display device, it is specifically configured to:
  • Each scene image is sent to its corresponding display device for display, so as to display the at least two scene images through at least two of the display devices.
  • the emulator 1211 is also used for:
  • image conversion is performed on at least part of the at least two scene images according to the calibration parameters corresponding to the visual sensor.
  • When the simulator 1211 determines the corresponding scene image according to the control signal and the virtual scene model, it is specifically configured to:
  • At least two scene images are determined according to the control signal, the virtual scene model, the stereo vision parameters, and the calibration parameters corresponding to the vision sensor.
  • the emulator 1211 is also used for:
  • the vision sensor is calibrated by a camera calibration method to determine the calibration parameters of the vision sensor.
  • the display device is a 3D display device, and the 3D display device displays the at least two scene images in a 3D projection manner.
  • the system further includes: an optical system
  • the optical system is arranged between the display device and the vision sensor, and the optical system is used to perform optical conversion on the scene image output by the display device, so that the converted scene image matches the field of view (field angle) of the vision sensor.
  • the scene image determined by the simulator 1211 is a single scene image
  • the image output device 1212 is specifically configured to output the at least two scene images according to the single scene image.
  • the emulator 1211 is also used for:
  • the sensing information is output, so that the control system determines a corresponding control signal according to the at least two scene images and the sensing information.
  • the sensing information includes point cloud data corresponding to the scene element; when the simulator 1211 determines the corresponding sensing information according to the control signal and the virtual scene model, it is specifically configured to:
  • Point cloud data is determined according to the relative pose between the movable platform and the scene element.
  • the emulator 1211 is also used for:
  • the control system is evaluated according to the operating state of the movable platform.
  • the control system is a driving control system applied to a vehicle
  • the movable platform is a vehicle
  • the simulation testing system provided in this embodiment can be used to execute the simulation testing methods described in the embodiments shown in FIG. 3A to FIG. 10 , and the specific implementation principles and effects can be referred to the foregoing embodiments, which will not be repeated here.
  • FIG. 13 is a schematic structural diagram of an emulator provided by an embodiment of the present application. As shown in FIG. 13 , the emulator includes: a memory 1301 and at least one processor 1302;
  • the memory 1301 stores computer-executed instructions
  • the at least one processor 1302 executes the computer-executable instructions stored in the memory 1301, so that the at least one processor 1302 executes the method described in any of the foregoing embodiments.
  • the above-mentioned memory 1301 may be independent or integrated with the processor 1302 .
  • the emulator may further include a bus for connecting the memory 1301 and the processor 1302.
  • An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the readable storage medium; when the computer program is executed, the method described in any of the foregoing embodiments is implemented.
  • An embodiment of the present application also provides a computer program product, including a computer program, which implements the method described in any of the foregoing embodiments when the computer program is executed by a processor.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • the division of the modules is only a logical function division. In actual implementation, there may be other division methods.
  • multiple modules may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the above-mentioned integrated modules implemented in the form of software functional modules may be stored in a computer-readable storage medium.
  • the above-mentioned software function modules are stored in a storage medium, and include several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute part of the steps of the methods described in the various embodiments of the present application.
  • The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the invention can be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • The memory may include high-speed RAM, and may also include non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disk, or the like.
  • An exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium may be located in an application specific integrated circuit (ASIC).
  • the processor and the storage medium may also exist in the electronic device or the host device as discrete components.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A simulation test method and system, a simulator, a storage medium, and a program product. The method comprises: obtaining a control signal for a movable platform output by a control system; on the basis of the control signal, simulating movement of the movable platform in a scene model to obtain a relative pose between the movable platform and a scene element; generating a plurality of scene images according to the relative pose, the plurality of scene images comprising scene images observed by at least two vision sensors when the movable platform moves in the scene model; and outputting the plurality of scene images, so that the control system generates a corresponding control signal according to the plurality of scene images. Thus, the actual image input of the control system can be simulated, a simulation test of the control system based on stereoscopic vision is implemented, the test efficiency is effectively improved, and the test cost is reduced.

Description

Simulation test method, system, simulator, storage medium and program product

Technical Field

The embodiments of the present application relate to the technical field of visual processing, and in particular to a simulation test method, system, simulator, storage medium and program product.

Background

With the continuous advancement of image processing technology, intelligent control systems based on visual images, such as automatic driving systems, are also developing rapidly. An intelligent control system based on visual images can, according to the images collected by a camera, perform corresponding operations through preset algorithms, so as to realize functions such as automatic driving and provide convenience for users.

In order to ensure the performance of the system, the system needs to be tested during the product development and verification stage. For an intelligent control system based on stereo vision, it is difficult to test the system through conventional visual simulation technology; only actual road tests can be relied on, which offer low test efficiency and high cost.

Summary of the Invention

Embodiments of the present application provide a simulation test method, system, simulator, storage medium and program product, which are used to implement testing of a control system based on stereo vision.
In a first aspect, an embodiment of the present application provides a simulation test method, which is applied to a simulation test system. The simulation test system is used to test a control system of a movable platform based on a virtual scene model, and the scene model includes a plurality of scene elements; the control system is configured to output, based on the scene images observed by at least two vision sensors, a control signal for controlling the movable platform. The method includes:

acquiring the control signal of the movable platform output by the control system;

based on the control signal, simulating the movement of the movable platform in the scene model to obtain the relative pose between the movable platform and the scene elements;

generating a plurality of scene images according to the relative pose, the plurality of scene images including scene images observed by at least two of the vision sensors when the movable platform moves in the scene model;

outputting the plurality of scene images, so that the control system generates corresponding control signals according to the plurality of scene images.
In a second aspect, an embodiment of the present application provides a simulation test method, including:

obtaining a control signal output by a control system to be tested, where the control signal is a control signal for controlling a movable platform, output by the control system according to historically input scene images and a preset control model;

outputting at least two scene images according to the control signal and a virtual scene model;

wherein the scene model includes a plurality of scene elements;

the at least two scene images include a first scene image and a second scene image; the first scene image includes a first pixel area, the second scene image includes a second pixel area, and the first pixel area and the second pixel area describe the same scene element; the positional deviation between the position of the first pixel area in the first scene image and the position of the second pixel area in the second scene image is determined according to the relative pose between the vision sensors of the movable platform;

the outputted scene images are used to represent the scene images of the scene elements observed by the vision sensors of the movable platform after the movable platform moves in response to the control signal and the relative pose between the movable platform and the scene elements changes.
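As a non-limiting illustration of the positional deviation described above, the following sketch projects one scene-element point into two pinhole cameras separated by a known relative pose. The intrinsics and the pure-translation (baseline) example are assumed values chosen only to show how the deviation follows from the relative pose between the vision sensors.

```python
import numpy as np

def project(point_cam: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Pinhole projection of a 3D point expressed in the camera frame."""
    x, y, z = point_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

# A scene-element point expressed in the first camera's frame (metres).
point_cam1 = np.array([0.4, 0.1, 5.0])

# Relative pose of camera 2 with respect to camera 1: here a pure 0.12 m baseline.
R_21 = np.eye(3)
t_21 = np.array([-0.12, 0.0, 0.0])
point_cam2 = R_21 @ point_cam1 + t_21

fx = fy = 800.0
cx, cy = 320.0, 240.0
p1 = project(point_cam1, fx, fy, cx, cy)  # position of the first pixel area
p2 = project(point_cam2, fx, fy, cx, cy)  # position of the second pixel area
deviation = p1 - p2  # follows from the relative pose (here about fx * baseline / depth)
```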
In a third aspect, an embodiment of the present application provides a simulation test system, where the simulation test system is used to test a control system of a movable platform based on a virtual scene model, and the scene model includes a plurality of scene elements;

the control system is configured to output, based on the scene images observed by at least two vision sensors, a control signal for controlling the movable platform;

the simulation test system includes a simulator and an image output device;

the simulator is configured to obtain the control signal of the movable platform output by the control system, simulate, based on the control signal, the movement of the movable platform in the scene model, obtain the relative pose between the movable platform and the scene elements, and generate an observable scene image based on the relative pose;

the image output device is configured to acquire the scene image generated by the simulator and output a plurality of scene images according to the acquired scene image, where the plurality of scene images include scene images observed by at least two of the vision sensors when the movable platform moves in the scene model, so that the control system generates corresponding control signals according to the plurality of scene images.

In a fourth aspect, an embodiment of the present application provides a simulation test system, including a simulator and an image output device;

the simulator is configured to obtain a control signal output by a control system to be tested, and determine a corresponding scene image according to the control signal and a virtual scene model, where the virtual scene model includes a plurality of scene elements;

the image output device is configured to acquire the scene image and output at least two scene images according to the scene image;

wherein the virtual scene model includes a plurality of scene elements;

the at least two scene images include a first scene image and a second scene image; the first scene image includes a first pixel area, the second scene image includes a second pixel area, and the first pixel area and the second pixel area describe the same scene element; the positional deviation between the position of the first pixel area in the first scene image and the position of the second pixel area in the second scene image is determined according to the relative pose between the vision sensors of the movable platform;

the control signal is a control signal for controlling the movable platform, output by the control system according to the historically input at least two scene images and a preset control model;

the outputted scene images are used to represent the scene images of the scene elements observed by the vision sensors of the movable platform after the movable platform moves in response to the control signal and the relative pose between the movable platform and the scene elements changes.

In a fifth aspect, an embodiment of the present application provides a simulator, including a memory and at least one processor;

the memory stores computer-executable instructions;

the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the method described in the first aspect or the second aspect.

In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed, the method described in the first aspect or the second aspect is implemented.

In a seventh aspect, an embodiment of the present application provides a computer program product, including a computer program, which implements the method described in the first aspect or the second aspect when the computer program is executed by a processor.

In the simulation test method, system, simulator, storage medium and program product provided by the embodiments of the present application, the simulation test system is used to test the control system of a movable platform based on a virtual scene model, the scene model includes a plurality of scene elements, and the control system is configured to output, based on the scene images observed by at least two vision sensors, a control signal for controlling the movable platform. The simulation test system can obtain the control signal of the movable platform output by the control system, simulate, based on the control signal, the movement of the movable platform in the scene model to obtain the relative pose between the movable platform and the scene elements, generate a plurality of scene images according to the relative pose, the plurality of scene images including scene images observed by at least two of the vision sensors when the movable platform moves in the scene model, and output the plurality of scene images, so that the control system generates corresponding control signals according to the plurality of scene images. In this way, the actual image input of the control system can be simulated by generating scene images observable by at least two vision sensors, realizing a simulation test of the control system based on stereo vision. The basic functions of the control system can be tested and verified economically and efficiently, which solves practical problems such as the unreliability, incomplete coverage of test scenarios and high investment caused by relying on a large number of road tests, effectively improving test efficiency and reducing test cost.
Brief Description of the Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the accompanying drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description are some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application;

FIG. 2 is a schematic flowchart of a simulation test method provided by an embodiment of the present application;

FIG. 3A is a schematic flowchart of another simulation test method provided by an embodiment of the present application;

FIG. 3B is a schematic diagram of the positions of scene elements in a scene image provided by an embodiment of the present application;

FIG. 4 is a schematic flowchart of yet another simulation test method provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of a test architecture based on multiple display devices provided by an embodiment of the present application;

FIG. 6 is a schematic diagram of a calibration pattern provided by an embodiment of the present application;

FIG. 7 is a schematic diagram of a test architecture based on an optical system provided by an embodiment of the present application;

FIG. 8 is a schematic diagram of a test architecture based on a 3D display device provided by an embodiment of the present application;

FIG. 9 is a schematic diagram of a test architecture based on an image output interface provided by an embodiment of the present application;

FIG. 10 is a schematic diagram of a test architecture based on a direct connection of the simulator provided by an embodiment of the present application;

FIG. 11A is a schematic structural diagram of a simulation testing apparatus provided by an embodiment of the present application;

FIG. 11B is a schematic structural diagram of another simulation testing apparatus provided by an embodiment of the present application;

FIG. 12A is a schematic structural diagram of a simulation test system provided by an embodiment of the present application;

FIG. 12B is a schematic structural diagram of another simulation test system provided by an embodiment of the present application;

FIG. 13 is a schematic structural diagram of a simulator provided by an embodiment of the present application.
Detailed Description

In order to make the purposes, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are a part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present application.

The solutions provided by the embodiments of the present application can be applied to the simulation testing of a control system of a movable platform. The movable platform can be any device capable of moving autonomously, such as a vehicle, a ship, an aircraft or an intelligent robot. The control system can be a control system based on visual images; such a control system can obtain images of the surrounding environment through the vision sensors of the movable platform and output corresponding control signals according to the environment images, so as to control the movable platform.

In practical applications, simulation test verification is often essential for control systems based on visual images. Taking the driving control system of a vehicle as an example, in order to ensure system performance, both simulation tests and actual road tests are needed during the product development and verification stage. In a simulation test, a camera can be used as the image input to test the performance of the system in a simulated environment.

FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application. As shown in FIG. 1, the driving control system and the camera connected to it constitute the system under test. The simulator can output road video to a display device, which displays it; after the camera captures the road video displayed by the display device, the automatic driving system can output driving control signals according to the road video collected by the camera, and the simulator further adjusts the output road video according to the driving control signals.

In this way, the simulation test can play the road video for the camera to "watch", making the driving control system "believe" that it is driving on a real road, so that it performs the same computations as in an actual road test and outputs the corresponding driving control signals to the simulator. The simulator then determines and outputs the next frame of the road image according to the control signals output by the driving control system, thereby realizing the test and verification process.
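Expressed as code, the loop of FIG. 1 might look like the sketch below. It uses OpenCV only to show a frame on the display and to grab a frame from the physical camera; the `simulator.next_frame` and `driving_control_system.process` calls are hypothetical placeholders, and the single-camera setup is a simplification of the architecture described above.

```python
import cv2  # OpenCV is assumed to be installed

def hil_loop(simulator, driving_control_system, camera_index: int = 0, steps: int = 100) -> None:
    """Hardware-in-the-loop sketch: the simulator's road frame is shown on a
    display, the physical camera films that display, and the driving control
    system reacts to what the camera actually captured."""
    cap = cv2.VideoCapture(camera_index)
    control_signal = None
    try:
        for _ in range(steps):
            road_frame = simulator.next_frame(control_signal)   # hypothetical simulator API
            cv2.imshow("road", road_frame)                      # display device
            cv2.waitKey(1)
            ok, captured = cap.read()                           # camera films the screen
            if not ok:
                break
            control_signal = driving_control_system.process(captured)  # hypothetical API
    finally:
        cap.release()
        cv2.destroyAllWindows()
```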
以上的测试方法一般被称为硬件在环实验(Hardware In Loop,HIL),这种实验完整地测试了整套系统(包括系统的硬件及软件部分),在汽车软件过程改进及能力评定(Automotive Software Process Improvement and Capacity Determination,A-SPICE)里面描述的V字模型(V-module)里属于系统鉴定测试(System Qualification Test)层次,是系统验证过程中必不可少的一环。The above test methods are generally called Hardware In Loop (HIL) experiments, which completely test the entire system (including the hardware and software parts of the system), in the process of automotive software process improvement and capability evaluation (Automotive Software The V-module described in Process Improvement and Capacity Determination (A-SPICE) belongs to the System Qualification Test level and is an indispensable part of the system verification process.
The above method enables hardware-in-the-loop simulation testing, so that products developed with vision technology do not need to be tested in real usage scenarios for functional verification. However, a driving control system based on a single camera involves no stereo depth, so it can be tested in simulation with a single display device. For a driving control system that relies on a stereo vision system as its observation input, when the images obtained from a single display device are fed into the system through two cameras, the stereo scene described by the video stream cannot be reconstructed by stereo matching: the images captured by the two cameras depict a flat plane rather than an actual three-dimensional scene, so the depth information is lost in principle and the stereo information cannot be reconstructed. As a result, driving control systems based on stereo vision are difficult to test with conventional visual simulation techniques and can only rely on actual road testing, which is inefficient and costly.
In view of this, embodiments of the present application provide a simulation test method that obtains the driving control signal output by the driving control system under test, simulates the motion of a vehicle in a preset virtual scene model according to the driving control signal, and generates the scene images observable by multiple vision sensors of the vehicle. The driving control system can then construct stereo information from the generated scene images and output corresponding driving control signals, thereby enabling simulation testing of stereo-vision-based driving control systems. Most of the basic functions of the driving control system can thus be tested and verified economically and efficiently in simulation, which addresses the practical problems of unreliability, incomplete scenario coverage, and high cost associated with relying on large amounts of road testing, effectively improving test efficiency and reducing test cost.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and the features in the embodiments may be combined with each other provided they do not conflict.
FIG. 2 is a schematic flowchart of a simulation test method provided by an embodiment of the present application. The method in this embodiment can be applied to a simulation test system. The simulation test system may be used to test a control system of a movable platform based on a virtual scene model, the scene model including a plurality of scene elements. The control system may be configured to output, based on scene images observed by at least two vision sensors, control signals for controlling the movable platform.
The simulation test system may include a simulator that performs the following method, or it may include a simulator together with other devices that jointly perform the following method. As shown in FIG. 2, the method may include:
Step 201: acquire the control signal of the movable platform output by the control system.
The embodiments of the present application can be applied to simulation testing of the control system of any type of movable platform. For ease of description, this embodiment takes the testing of a vehicle driving control system as an example. It should be understood that the method is also applicable to testing the control systems of other movable platforms.
Accordingly, the control system in this step may be a driving control system applied to a vehicle, the output control signal may be a driving control signal, and the movable platform may be a vehicle. The driving control system may be implemented in software, in hardware, or in a combination of both. In this step, the driving control signal output by the driving control system under test may be acquired, the driving control signal being a signal for controlling the vehicle that the driving control system outputs according to previously input scene images and a preset driving control model. The driving control system may be an autopilot system, an automatic driving assistance system (ADAS), or any system capable of driving control.
Specifically, the vehicle may be provided with a plurality of vision sensors arranged at different positions of the vehicle, for example at different positions on the front windshield. Vision sensors at different positions capture different pictures, so that depth information of the surrounding environment can be constructed from the multiple pictures for better driving control.
The driving control model may be a preset driving control model whose input includes the images captured by the plurality of vision sensors and whose output includes the corresponding driving control signal. The driving control signal may be used to control the vehicle; optionally, it may control the vehicle's speed, steering, braking, and other aspects.
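The following is a minimal sketch of what such a model interface might look like, purely for illustration. The class and field names (DrivingControlModel, DrivingControlSignal, predict) are hypothetical and not part of any specific product; a real model would run stereo perception rather than return a fixed command.

```python
from dataclasses import dataclass
from typing import Sequence

import numpy as np


@dataclass
class DrivingControlSignal:
    # Hypothetical fields: signed acceleration (m/s^2), steering angle (rad), brake flag.
    acceleration: float
    steering_angle: float
    brake: bool


class DrivingControlModel:
    """Placeholder for the preset driving control model under test."""

    def predict(self, images: Sequence[np.ndarray]) -> DrivingControlSignal:
        # A real model would run stereo matching / perception here; this stub
        # simply keeps the vehicle moving straight at a fixed acceleration.
        return DrivingControlSignal(acceleration=1.0, steering_angle=0.0, brake=False)
```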
The driving control signal is generated from the images captured by the plurality of vision sensors. For example, when the images indicate an obstacle ahead, the output driving control signal may be: brake; when they indicate that the road ahead is flat and free of obstacles, the output driving control signal may be: drive at acceleration a; and so on.
Optionally, the driving control signal may also be determined from the images captured by the plurality of vision sensors in combination with other information, such as path planning and traffic control information. For example, when the images indicate that the traffic light ahead is currently green and the planned path requires a left turn, the output driving control signal may be: turn left.
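A toy sketch of the kind of decision rule described in the two examples above is given below. The inputs are assumed to come from upstream perception and a route planner; the function and parameter names are illustrative only.

```python
def decide_control(obstacle_ahead: bool, light_is_green: bool,
                   planned_turn: str, cruise_accel: float = 1.0) -> str:
    """Toy rule combining perception results with path planning.

    Returns command strings mirroring the examples in the text; a real
    driving control model would output structured control signals instead.
    """
    if obstacle_ahead:
        return "brake"
    if light_is_green and planned_turn == "left":
        return "turn left"
    return f"drive at acceleration {cruise_accel}"
```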
In the embodiments of the present application, the movable platform may be a physical movable platform or a virtual one, and the vision sensor may be a physical vision sensor or a virtual one.
For example, in the simulation test stage, the driving control system does not need to be installed in an actual vehicle for whole-vehicle testing; instead, the simulator performs the test based on the signals output by the driving control system, which is equivalent to the driving control system controlling a virtual vehicle.
Optionally, the driving control system may be connected to the simulator. The simulator outputs scene images to the driving control system; the driving control system outputs driving control signals to the simulator according to the previously input scene images and the preset driving control model; the simulator then updates the scene images according to the driving control signals and outputs them to the driving control system.
Specifically, the simulator can update the position of the vehicle in the virtual scene in real time according to the driving control signal, and then update the scene images that the driving control system can "see". As the vehicle moves forward or backward, the scene images change accordingly, much like in a racing game, so that the driving control system effectively drives the vehicle through the virtual scene.
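A minimal sketch of this closed loop is shown below. The simulator and control-system interfaces (reset, render_views, step, predict) are assumed names for illustration, not a specific product API.

```python
def run_closed_loop(simulator, control_system, steps: int = 1000, dt: float = 0.05):
    """Closed-loop sketch: the simulator renders views, the control system reacts.

    `simulator` is assumed to expose reset() -> pose, render_views(pose) -> list
    of images, and step(signal, dt) -> pose; `control_system` is assumed to
    expose predict(images) -> control signal.
    """
    pose = simulator.reset()
    for _ in range(steps):
        images = simulator.render_views(pose)      # one image per virtual vision sensor
        signal = control_system.predict(images)    # driving control signal under test
        pose = simulator.step(signal, dt)          # advance the vehicle in the scene model
    return pose
```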
Step 202: based on the control signal, simulate the motion of the movable platform in the scene model to obtain the relative pose between the movable platform and the scene elements.
The scene element may be any element in the scene model. For example, in a road simulation test, the scene elements may include lane lines, roadblocks, trees, pedestrians, traffic lights, and so on, so as to simulate an actual road environment.
The relative pose may include position and/or angle information of the movable platform relative to one or more scene elements in the scene model.
Step 203: generate a plurality of scene images according to the relative pose, the plurality of scene images including the scene images observed by at least two of the vision sensors while the movable platform moves in the scene model.
At least two vision sensors form a multi-view vision sensor capable of capturing images of the surrounding environment. The embodiments of the present application do not limit the number or pose of the vision sensors: there may be two or more sensors, and they may be arranged side by side, one above the other, or in any other manner.
According to the relative pose between the vehicle and at least one scene element, the position, angle, and other information of a vision sensor mounted on the vehicle relative to the surrounding scene elements can be determined, and the scene image observed by that vision sensor can then be determined.
Optionally, the number of scene images may equal the number of vision sensors. For example, if the control system takes as input the images captured by the left and right vision sensors of the vehicle, the plurality of scene images generated in this step may include the scene image observable by the left vision sensor and the scene image observable by the right vision sensor.
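One way to obtain a per-sensor view is to compose the vehicle pose with each sensor's mounting pose, then render one image per resulting camera pose. The sketch below assumes 4x4 homogeneous transforms and a body-to-sensor extrinsic per sensor; these conventions are illustrative.

```python
import numpy as np

def sensor_poses(vehicle_pose: np.ndarray, extrinsics: list[np.ndarray]) -> list[np.ndarray]:
    """Compose the vehicle pose with each sensor's mounting pose.

    `vehicle_pose` is the vehicle's 4x4 transform in the scene-model frame;
    `extrinsics` holds each sensor's 4x4 pose relative to the vehicle body.
    The result is one camera pose per sensor, from which a renderer can
    produce the corresponding scene image.
    """
    return [vehicle_pose @ T_body_to_sensor for T_body_to_sensor in extrinsics]
```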
Step 204: output the plurality of scene images, so that the control system generates corresponding control signals according to the plurality of scene images.
For example, the simulator may output a plurality of corresponding scene images according to the driving control signal obtained from the driving control system; the output images can simulate the parallax images captured by the vehicle's multiple vision sensors. Specifically, the scene images may be the images that can be observed after the vehicle moves in the scene model in response to the control signal. As the vehicle keeps driving forward, the simulator can output a video stream with parallax. This forms a continuous loop of control and scene-image updates, thereby realizing the test of the driving control system.
Optionally, the scene images need not be output directly to the control system by the simulator; they may instead be input to the control system by other devices, which is not limited in this embodiment.
The solution of this embodiment has been described above by taking the testing of a vehicle driving control system as an example. On this basis, the vehicle may be replaced with another movable platform and the driving control system with another control system, so as to test other stereo-vision-based control systems.
In one example, the movable platform may be a ship, and the scene model may be a model of the ship's actual working scene, for example a model of a river. The ship may be equipped with a control system and a plurality of vision sensors, and the method described in this embodiment can be used to test the ship's control system, thereby assisting the test and verification of the ship's autonomous navigation.
In another example, the movable platform may be an intelligent robot, and the scene model may be a model of the robot's actual working scene, for example a model of a shopping mall or a warehouse. The intelligent robot may be equipped with a control system and a plurality of vision sensors, and the method described in this embodiment can be used to test the robot's control system, thereby assisting the test and verification of the robot's autonomous movement.
The simulation test method provided by this embodiment can be applied to a simulation test system used to test the control system of a movable platform based on a virtual scene model, the scene model including a plurality of scene elements, and the control system being configured to output, based on scene images observed by at least two vision sensors, control signals for controlling the movable platform. Specifically, the simulation test system acquires the control signal of the movable platform output by the control system; based on the control signal, simulates the motion of the movable platform in the scene model to obtain the relative pose between the movable platform and the scene elements; generates a plurality of scene images according to the relative pose, the plurality of scene images including the scene images observed by at least two of the vision sensors while the movable platform moves in the scene model; and outputs the plurality of scene images so that the control system generates corresponding control signals according to them. By generating scene images observable by at least two vision sensors, the actual image input of the control system can be simulated, enabling simulation testing of stereo-vision-based control systems and allowing the basic functions of the control system to be tested and verified economically and efficiently. This addresses the practical problems of unreliability, incomplete scenario coverage, and high cost associated with relying on large amounts of road testing, effectively improving test efficiency and reducing test cost.
On the basis of the technical solutions provided by the above embodiments, optionally, generating a plurality of scene images according to the relative pose may include:
determining, according to the relative pose between the movable platform and the scene element, the imaging change of the scene element observable by the movable platform, and generating a plurality of scene images based on the imaging change.
The relative pose may include position and/or angle information of the movable platform relative to one or more scene elements in the scene model. According to the relative pose, the scene elements observable by the movable platform can be determined, and the imaging information of those scene elements on the vision sensors carried by the movable platform can be determined based on the relative pose.
When the relative pose changes, the imaging of the scene elements also changes. Based on this imaging change, the imaging information of the scene elements observed by the vision sensor after the pose change can be determined, and the corresponding scene images can be generated.
Determining the imaging changes of the observable scene elements from the relative pose and generating the scene images based on these changes makes it possible to accurately construct the scene images observable by the movable platform, improving the efficiency and accuracy of the simulation test.
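As an illustration of how a pose change translates into an imaging change, the sketch below projects a scene-element point into a sensor image under a standard pinhole camera model. The transform and intrinsic-matrix conventions are assumptions for the sake of the example.

```python
import numpy as np

def project_point(point_world: np.ndarray, T_world_to_cam: np.ndarray,
                  K: np.ndarray) -> np.ndarray:
    """Project a 3D scene-element point into a sensor image (pinhole model).

    `T_world_to_cam` is a 4x4 transform derived from the relative pose, and
    `K` is the 3x3 intrinsic matrix. As the relative pose changes, the same
    point lands at a different pixel, which is the imaging change described above.
    """
    p_cam = T_world_to_cam @ np.append(point_world, 1.0)   # world -> camera frame
    uvw = K @ p_cam[:3]                                    # perspective projection
    return uvw[:2] / uvw[2]                                # pixel coordinates (u, v)
```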
In an optional implementation, when generating the plurality of scene images according to the relative pose, the plurality of scene images may specifically be generated according to the relative pose between the movable platform and the scene element together with stereo vision parameters.
The stereo vision parameters may include, but are not limited to, the height, shooting angle, and baseline parameters of the vision sensors.
According to the relative pose between the movable platform and at least one scene element and the stereo vision parameters, the position, angle, and other information of each vision sensor mounted on the movable platform relative to the surrounding scene elements can be determined, and hence the scene image observed by that vision sensor. In this way, the scene image observable by each vision sensor can be generated more precisely, further improving the accuracy of the simulation test.
In an optional implementation, the stereo vision parameters are determined by the installation poses of the at least two vision sensors and/or the relative pose between the at least two vision sensors.
In an optional implementation, outputting the plurality of scene images includes:
sending the plurality of scene images to the control system through an image output interface.
In an optional implementation, sending the plurality of scene images to the control system through an image output interface includes:
converting the plurality of scene images into a plurality of scene images in a preset format;
sending the plurality of scene images in the preset format to the control system through the image output interface.
In an optional implementation, outputting the plurality of scene images so that the control system generates corresponding control signals according to the plurality of scene images includes:
sending the plurality of scene images to a display device so that the display device displays them, so that the control system outputs corresponding control signals based on the images obtained by the at least two vision sensors photographing the display device;
where the number of vision sensors is the same as the number of displayed scene images; the at least two vision sensors correspond one-to-one to the plurality of scene images, and each vision sensor is used to photograph its corresponding scene image.
In an optional implementation, the number of display devices is the same as the number of vision sensors; at least two display devices correspond one-to-one to the at least two vision sensors, and each vision sensor is used to photograph the picture displayed by its corresponding display device.
Correspondingly, sending the plurality of scene images to a display device so that the display device displays them includes:
sending each scene image to its corresponding display device for display, so that the plurality of scene images are displayed by at least two display devices.
In an optional implementation, before each scene image is sent to its corresponding display device for display, the method further includes:
performing image conversion on at least some of the plurality of scene images according to the calibration parameters corresponding to the vision sensors.
In an optional implementation, generating a plurality of scene images according to the relative pose includes:
generating a plurality of scene images according to the relative pose between the movable platform and the scene elements, the stereo vision parameters, and the calibration parameters corresponding to the vision sensors.
In an optional implementation, the method further includes:
calibrating the vision sensors by a camera calibration method to determine the calibration parameters of the vision sensors.
In an optional implementation, the display device is a 3D display device;
sending the plurality of scene images to the display device so that the display device displays them includes:
sending the plurality of scene images to the 3D display device, so that the 3D display device displays the plurality of scene images by 3D projection.
In an optional implementation, the method further includes:
determining corresponding sensing information according to the relative pose between the movable platform and the scene elements;
outputting the sensing information, so that the control system determines corresponding control signals according to the plurality of scene images and the sensing information.
In an optional implementation, the sensing information includes point cloud data corresponding to the scene elements.
In an optional implementation, the method further includes:
determining the operating state of the movable platform according to the control signal;
evaluating the control system according to the operating state of the movable platform.
In an optional implementation, the movable platform is a vehicle and the control system is a driving control system applied to the vehicle.
The technical solutions of the present application are described in detail below with some specific embodiments. The embodiments given in this application may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. For content not described in detail in some embodiments, reference may be made to the relevant descriptions in other embodiments.
FIG. 3A is a schematic flowchart of another simulation test method provided by an embodiment of the present application. The method in this embodiment may be executed by a simulator, or jointly by a simulator and other devices. As shown in FIG. 3A, the method may include:
Step 301: acquire the control signal output by the control system under test, the control signal being a signal for controlling the movable platform that the control system outputs according to previously input scene images and a preset control model.
Step 302: output at least two scene images according to the control signal and the virtual scene model.
The scene model includes a plurality of scene elements.
The at least two scene images include a first scene image and a second scene image; the first scene image includes a first pixel region and the second scene image includes a second pixel region, the first pixel region and the second pixel region describing the same scene element. The positional deviation between the position of the first pixel region in the first scene image and the position of the second pixel region in the second scene image is determined according to the relative pose between the vision sensors of the movable platform.
The output scene images represent the scene images of the scene elements observed by the vision sensors of the movable platform after the movable platform has moved in response to the control signal and its relative pose with respect to the scene elements has changed.
For explanations of related concepts such as the control system, the simulator, the scene model, and the scene elements in this embodiment, as well as the specific implementation principles and processes of each step, reference may be made to the foregoing embodiments, which are not repeated here.
In this embodiment, the multiple output scene images may be scene images with parallax. FIG. 3B is a schematic diagram of the positions of a scene element in scene images provided by an embodiment of the present application. As shown in FIG. 3B, the left and right pictures are the two output scene images, and the two scene images have parallax.
Specifically, for any scene element such as a tree, its position may differ between scene images. This is because a person's two eyes are separated by a certain distance and view an object from different angles, so the object appears at different positions in the views of the left and right eyes, producing parallax. A stereo-vision-based control system exploits exactly this principle: it constructs depth information of the surrounding environment from multiple images with parallax, thereby controlling the vehicle more precisely.
When testing the driving control system, at least two scene images generated from the scene model may be output to the control system, and the same scene element in the scene model may be located in different pixel regions in different scene images.
Referring to FIG. 3B, the position of the tree in the first scene image may differ from its position in the second scene image. Specifically, the tree may be located in a first pixel region in the first scene image and in a second pixel region in the second scene image. The position of the first pixel region in the first scene image and the position of the second pixel region in the second scene image do not coincide exactly but have a certain deviation.
In the scenario of simulation testing a vehicle driving control system, the positional deviation between the position of the first pixel region in the first scene image and the position of the second pixel region in the second scene image may be determined according to the relative pose between the vision sensors of the vehicle.
The closer the positions and shooting angles of the vehicle's at least two vision sensors, the smaller the positional deviation of the same scene element between different scene images; the greater the difference in their positions and shooting angles, the greater the positional deviation of the same scene element between different scene images.
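For the common special case of two rectified cameras, this positional deviation is the standard stereo disparity, which grows with the baseline and shrinks with distance. A small sketch of that relation, under the rectified-pair assumption, is shown below.

```python
def disparity_pixels(baseline_m: float, focal_px: float, depth_m: float) -> float:
    """Standard rectified-stereo relation: disparity = f * B / Z.

    `baseline_m` is the distance between the two camera centers, `focal_px` the
    focal length in pixels, and `depth_m` the distance to the scene element.
    A larger baseline or a closer element yields a larger positional deviation
    between the two scene images.
    """
    return focal_px * baseline_m / depth_m
```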
The simulation test method provided by this embodiment acquires the control signal output by the control system under test, the control signal being a signal for controlling the movable platform that the control system outputs according to previously input scene images and a preset control model, and outputs at least two scene images according to the control signal and the virtual scene model, where the scene model includes a plurality of scene elements, the scene images are the scene images of the scene elements observed after the movable platform moves in response to the control signal, and the positional deviation of a scene element between different scene images is determined according to the relative pose between the vision sensors of the movable platform. The actual image input of the control system can thus be simulated with multiple scene images having parallax, enabling simulation testing of stereo-vision-based control systems and allowing the basic functions of the control system to be tested and verified economically and efficiently. This addresses the practical problems of unreliability, incomplete scenario coverage, and high cost associated with relying on large amounts of road testing, effectively improving test efficiency and reducing test cost.
FIG. 4 is a schematic flowchart of yet another simulation test method provided by an embodiment of the present application. On the basis of the technical solutions provided by the above embodiments, this embodiment displays at least two scene images through a display device. As shown in FIG. 4, the method may include:
Step 401: acquire the control signal output by the control system under test, the control signal being a signal for controlling the movable platform that the control system outputs according to previously input scene images and a preset control model.
For the specific implementation principle and process of step 401 in this embodiment, reference may be made to the foregoing embodiments, which are not repeated here.
Step 402: determine at least two scene images according to the control signal and the virtual scene model.
In this embodiment, outputting at least two scene images according to the control signal and the virtual scene model can be realized through steps 402 and 403. Specifically, at least two scene images may first be determined according to the control signal and the virtual scene model, and then output.
Optionally, outputting at least two scene images according to the control signal and the virtual scene model may include: determining at least two scene images according to the control signal, the virtual scene model, and stereo vision parameters; and outputting the at least two scene images.
The stereo vision parameters may be any parameters used to impart parallax to the scene images. Optionally, the stereo vision parameters are determined by the installation poses of the vision sensors of the movable platform and/or the relative pose between the vision sensors of the movable platform.
Optionally, the installation pose may include an installation position and/or an installation angle, and the relative pose may include the relative positions and/or angles of the vision sensors with respect to each other. Determining the stereo vision parameters from the installation poses and relative poses of the vision sensors allows the parallax between the scene images to be determined more accurately, improving the accuracy of the simulation test.
Specifically, the stereo vision parameters may include the baseline parameter between the vision sensors. The baseline parameter characterizes the center-to-center distance between the vision sensors; the parallax between the output scene images is set through the baseline parameter, so that scene images can be generated quickly and accurately.
Optionally, determining at least two scene images according to the control signal, the virtual scene model, and the stereo vision parameters may include: determining the relative pose between the movable platform and the scene elements according to the control signal and the virtual scene model; and determining at least two scene images according to the relative pose between the movable platform and the scene elements and the stereo vision parameters.
For example, where the movable platform is a vehicle, after the driving control system outputs a driving control signal, the vehicle is regarded as having moved in the virtual scene model, which is equivalent to a change in the vehicle's position relative to scene elements such as trees. According to the relative pose between the vehicle and the scene elements and the stereo vision parameters, the at least two scene images to be output can be determined.
The specific method of determining the scene images from the relative pose and the stereo vision parameters can be implemented through simulation experiments and is not detailed in the embodiments of the present application. Determining the relative pose between the movable platform and the scene elements from the control signal and the virtual scene model, and then determining at least two scene images from the relative pose and the stereo vision parameters, makes it possible to accurately simulate the images the movable platform would observe in a real scene, further improving test accuracy. A sketch of one possible way to derive the two virtual camera poses from the baseline parameter follows.
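The sketch below offsets a virtual camera by half the baseline on each side of the vehicle pose; a renderer (not shown) would then draw one scene image per camera pose. The 4x4-transform convention and the choice of the lateral axis are assumptions for illustration.

```python
import numpy as np

def stereo_camera_poses(vehicle_pose: np.ndarray, baseline_m: float) -> tuple[np.ndarray, np.ndarray]:
    """Derive left/right virtual camera poses from the vehicle pose and baseline.

    `vehicle_pose` is the vehicle's 4x4 transform in the scene-model frame; each
    camera is offset by half the baseline along the vehicle's lateral (x) axis.
    """
    offset = np.eye(4)
    offset[0, 3] = -baseline_m / 2.0          # left camera: half a baseline to the left
    left = vehicle_pose @ offset
    offset[0, 3] = +baseline_m / 2.0          # right camera: half a baseline to the right
    right = vehicle_pose @ offset
    return left, right
```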
Step 403: send the at least two scene images to a display device so that the display device displays them, so that the control system determines corresponding control signals according to the images captured by at least two vision sensors and the preset control model.
In this embodiment, outputting at least two scene images can be realized through step 403.
The vision sensor may be a camera or any other device capable of capturing images. The number of vision sensors is the same as the number of output scene images; the at least two vision sensors correspond one-to-one to the at least two scene images, each vision sensor being used to photograph its corresponding scene image; and the relative pose between the at least two vision sensors is determined by the relative pose between the vision sensors of the movable platform.
Specifically, the one-to-one correspondence between the at least two vision sensors and the at least two scene images may mean that there are n vision sensors and n scene images, where the i-th vision sensor corresponds to the i-th scene image, i.e. the i-th vision sensor is used to photograph the i-th scene image displayed by the display device, with i ranging from 1 to n and n being a positive integer ≥ 2.
Optionally, the at least two vision sensors used in the simulation test can simulate the vision sensors used by the vehicle in actual applications, and the relative pose between them may be determined by the relative pose between the vision sensors in actual applications. Since the at least two vision sensors are used to simulate the vision sensors of the actual vehicle, they may be configured according to the actual vehicle's vision sensors; for example, their number, resolution, positions, and shooting angles may be determined according to those of the actual vehicle's vision sensors, so that the vision sensors used in the test are consistent with those used in practice, improving the accuracy of the test.
The simulation test method provided by this embodiment makes it possible to test both the control system and the vision sensors. The simulator plays the scene images for the vision sensors to "watch" through the display device, and the control system processes the images captured by the vision sensors and outputs corresponding control signals. This test method realizes a hardware-in-the-loop experiment that fully tests both the hardware and software of the whole system, filling the experimental gap in hardware-in-the-loop simulation testing for stereo vision.
On the basis of the technical solutions provided by the above embodiments, optionally, there may be one or more display devices. For example, multiple display devices may be provided, each displaying a corresponding scene image, or a single display device may be provided, displaying the scene images in different display regions.
Optionally, the number of display devices is the same as the number of vision sensors; the at least two display devices correspond one-to-one to the at least two vision sensors, and each vision sensor is used to photograph the picture displayed by its corresponding display device. Correspondingly, sending the at least two scene images to a display device so that the display device displays them may include: sending each scene image to its corresponding display device for display, so that the at least two scene images are displayed by at least two display devices.
The one-to-one correspondence between the at least two display devices and the at least two vision sensors may mean that there are n vision sensors and n display devices, where the i-th vision sensor corresponds to the i-th display device, i.e. the i-th vision sensor is used to photograph the picture of the i-th display device, with i ranging from 1 to n and n being a positive integer ≥ 2.
FIG. 5 is a schematic diagram of a test architecture based on multiple display devices provided by an embodiment of the present application. As shown in FIG. 5, the control system and the vision sensors connected to it constitute the system under test. There are two vision sensors, denoted the left camera and the right camera, and there may likewise be two display devices, denoted the left display device and the right display device; correspondingly, there are two output scene images, denoted the left image and the right image.
During the simulation test, after the simulator generates the left and right images with parallax, it can send the left image to the left display device and the right image to the right display device. The two display devices play the left and right parallax images synchronously, and the left and right cameras photograph the pictures shown on the left and right display devices respectively, so that different cameras capture different images that are input to the control system for processing, realizing a combined test of software and hardware.
It should be noted that a monocular system perceives changes in the environment through changes between consecutive frames, whereas the images obtained by a binocular system have parallax, so the depth information of a target object can be inferred from the two binocular images at a single moment. Based on this principle, this embodiment uses two display devices to play the video streams formed by the scene images, and two cameras to capture the video stream played by each display device and pass it to the control system, realizing a stereo imaging experiment in a simulated environment. A stereo simulation video stream can thus be generated indoors and fed to the system under test, achieving the same test effect as a real scene.
By providing multiple display devices, with display devices, vision sensors, and scene images in one-to-one correspondence, each scene image is displayed by its corresponding display device and photographed by its corresponding vision sensor, so that the scene images do not interfere with each other, improving the accuracy of the image output.
Optionally, the vision sensors may also be calibrated by a camera calibration method to determine their calibration parameters, and the output scene images may be determined according to the calibration parameters. The camera calibration method may include calibration methods such as the checkerboard method.
FIG. 6 is a schematic diagram of a calibration pattern provided by an embodiment of the present application. When calibrating the vision sensors, the pattern shown in FIG. 6 may be used. Specifically, the two display devices each play the calibration pattern, the corresponding vision sensors capture calibration images, and a calibration algorithm computes the rotation and/or translation parameters of the pattern, which are recorded as the calibration parameters of the vision sensors. These parameters are fed back to the simulator, which applies the inverse rotation and/or translation to its output images so that the images captured by the two vision sensors are aligned, enabling good depth matching for the stereo vision algorithm.
For example, if calibration determines that the left image needs to be rotated 5° clockwise relative to the right image, then in subsequent tests every image shown on the left display device can be rotated 5° clockwise to keep the left and right images aligned.
In one example, before each scene image is sent to its corresponding display device for display, image transformation may be performed on at least some of the at least two scene images according to the calibration parameters corresponding to the vision sensors.
Specifically, the simulator may first obtain at least two scene images with parallax, and then perform image conversion, for example rotation and/or translation, on at least some of them according to the calibration parameters, finally obtaining aligned images.
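A minimal sketch of such a correction using OpenCV is given below. It assumes the calibration step yields a rotation angle and a pixel translation; the function name and parameter layout are illustrative.

```python
import cv2
import numpy as np

def apply_calibration(image: np.ndarray, angle_deg: float, tx: float, ty: float) -> np.ndarray:
    """Rotate and translate a scene image by the calibrated correction.

    `angle_deg` is positive for counter-clockwise rotation in OpenCV, so a
    5-degree clockwise correction would be passed as -5.0; (tx, ty) is the
    pixel translation. Both are assumed to come from the calibration step.
    """
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)  # 2x3 rotation about center
    M[0, 2] += tx                                                    # add translation
    M[1, 2] += ty
    return cv2.warpAffine(image, M, (w, h))
```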
Optionally, the image transformation may be implemented by the simulator, or by adding another module after the simulator.
When the calibration parameters change, the at least two scene images can be adjusted directly according to the changed parameters; the scheme is flexible and easy to implement.
In another example, outputting at least two scene images according to the control signal and the virtual scene model may include: determining at least two scene images according to the control signal, the virtual scene model, the stereo vision parameters, and the calibration parameters corresponding to the vision sensors; and outputting the at least two scene images.
Specifically, the simulator can directly obtain at least two aligned scene images with parallax according to the control signal, the virtual scene model, the stereo vision parameters, and the calibration parameters corresponding to the vision sensors, so that the final output scene images can be determined quickly, improving the efficiency of the simulation test.
FIG. 7 is a schematic diagram of a test architecture based on an optical system provided by an embodiment of the present application. The scheme shown in FIG. 7 adds an optical system to the scheme shown in FIG. 5. The optical system may be arranged between the display device and the vision sensor and is used to optically convert the image output by the display device so that the converted image matches the field of view (FOV) of the vision sensor.
There may be one or more optical systems: each display device may be provided with its own optical system, or multiple display devices may share one. The structure of the optical system may be designed according to actual needs, as long as the converted image matches the field of view of the vision sensor.
Since the field of view of an automotive vision sensor is usually large, matching it by direct display typically makes the entire test system very bulky. An optical system can therefore be introduced in this embodiment to optically convert and magnify the image output by the display device so that it matches the field of view of the vision sensor, effectively reducing the size of the imaging portion without requiring huge display devices and reducing the test footprint.
FIG. 8 is a schematic diagram of a test architecture based on a 3D display device provided by an embodiment of the present application. In this architecture, the display device may be a 3D display device. Correspondingly, sending the at least two scene images to a display device so that the display device displays them may include: sending the at least two scene images to the 3D display device, so that the 3D display device displays them by 3D projection.
Among the various types of light, light with no fixed polarization direction, i.e. whose waves vibrate with equal intensity in all directions, is usually called natural light; sunlight is the most common example. Light with a fixed polarization direction is usually called polarized light, and the light emitted by the screen of a display device may be polarized.
Optionally, different polarization directions can be set for different scene images, thereby forming a 3D image. Each vision sensor may be fitted with a polarizer matching the polarization direction of the corresponding polarized light emitted by the display device; the polarizer only admits light of that specific polarization direction into the vision sensor, so the image of that polarization direction can be captured accurately without interference from images of other polarization directions.
Optionally, the polarization directions of the two scene images may be orthogonal, and each vision sensor is matched to the polarization direction of its corresponding image.
For example, if the polarization direction of the first scene image is direction 1 and that of the second scene image is direction 2, and the first vision sensor captures the first scene image while the second vision sensor captures the second scene image, then the polarization direction of the first vision sensor may be direction 1 and that of the second vision sensor may be direction 2.
As shown in FIG. 8, the control system and the vision sensors connected to it constitute the system under test. There are two vision sensors, denoted the left camera and the right camera, while there may be only one display device, a 3D display device capable of displaying 3D images. In this way, a single 3D display device can display two scene images for the left and right cameras to capture separately.
In this implementation, the at least two scene images can be played back in 3D by means of 3D projection. The two cameras under test can each be fitted with orthogonal polarizers, similar to watching a 3D film, so that the two cameras watch the same picture at the same time; because the polarization directions differ, the images captured by the two cameras differ, and the back-end stereo reconstruction of the control system can still be achieved.
By sending at least two scene images to the 3D display device so that it displays them by 3D projection, multiple scene images can be displayed on the same display device without interfering with each other, reducing the number of devices, the test site area, and the test cost while maintaining test accuracy.
图9为本申请实施例提供的一种基于图像输出接口的测试架构示意图。在图9所示方案中,输出至少两个场景图像,可以包括:将所述至少两个场景图像通过图像输出接口发送给所述控制系统。FIG. 9 is a schematic diagram of a test architecture based on an image output interface provided by an embodiment of the present application. In the solution shown in FIG. 9 , outputting at least two scene images may include: sending the at least two scene images to the control system through an image output interface.
其中,所述图像输出接口可以是和仿真器集成在一起的,也可以是和仿真器分离设置的,或者,所述图像输出接口也可以是和控制系统集成到一起的。Wherein, the image output interface may be integrated with the simulator, or may be provided separately from the simulator, or the image output interface may also be integrated with the control system.
如图9所示，仿真器可以在获取到控制系统输出的控制信号后，可以根据控制信号以及虚拟的场景模型确定至少两个场景图像，然后，将所述至少两个场景图像通过图像输出接口发送给所述控制系统，所述控制系统可以继续根据场景图像生成对应的控制信号并输出。As shown in FIG. 9, after acquiring the control signal output by the control system, the simulator may determine at least two scene images according to the control signal and the virtual scene model, and then send the at least two scene images to the control system through the image output interface; the control system may then continue to generate and output corresponding control signals according to the scene images.
在这种实现方式中，可以舍弃视觉传感器的实际成像输入，直接通过图像输出接口，以数字信号的形式将至少两个场景图像分别输入到控制系统，同样可以在控制系统实现后端的立体复原，这种方式属于伪硬件在环实现，视觉传感器的光学成像部分没有被测试到，但是控制系统可以被测试到，从而实现系统软件的测试。In this implementation, the actual imaging input of the vision sensors can be dispensed with, and the at least two scene images can be input to the control system directly through the image output interface in the form of digital signals, while the back-end stereo reconstruction can still be realized in the control system. This approach is a pseudo hardware-in-the-loop implementation: the optical imaging part of the vision sensors is not tested, but the control system is, so that the system software can be tested.
通过上述实现方式,可以单独对控制系统进行测试,以进一步减少仿真测试所使用的设备,提高仿真测试的灵活性。一般来说,可以先在前期单独测试控制系统,控制系统测试通过后,可以加入视觉传感器,进一步进行软硬件测试,这样,通过一个仿真器可以实现两种测试,提升测试效率。Through the above implementation manner, the control system can be tested independently, so as to further reduce the equipment used in the simulation test and improve the flexibility of the simulation test. Generally speaking, the control system can be tested separately in the early stage. After the control system has passed the test, a visual sensor can be added to further test the software and hardware. In this way, two tests can be implemented through a single simulator, which improves the test efficiency.
可选的，将所述至少两个场景图像通过图像输出接口发送给所述控制系统，可以包括：将所述至少两个场景图像转换为预设格式的至少两个场景图像；将所述预设格式的至少两个场景图像通过图像输出接口发送给所述控制系统。Optionally, sending the at least two scene images to the control system through the image output interface may include: converting the at least two scene images into at least two scene images in a preset format; and sending the at least two scene images in the preset format to the control system through the image output interface.
其中，所述预设格式可以为实际使用时的视觉传感器输出的格式。例如，在实际使用时，视觉传感器向控制系统输出的格式为USB格式。那么，在测试过程中，仿真器可以将根据控制信号和虚拟的场景模型确定的HDMI格式的场景图像转换为USB格式并通过图像输出接口发送给控制系统，从而使测试环境更加贴近实际使用场景，满足控制系统的输入需求，使得测试能够顺利进行。The preset format may be the format output by the vision sensor in actual use. For example, in actual use, the vision sensor outputs images to the control system in a USB format. During the test, the simulator may therefore convert the scene images in an HDMI format, determined according to the control signal and the virtual scene model, into the USB format and send them to the control system through the image output interface, which makes the test environment closer to the actual use scenario, satisfies the input requirements of the control system, and allows the test to proceed smoothly.
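As a non-limiting sketch of such a format conversion (the concrete pixel format is an assumption; the embodiment only requires that the output match what the emulated sensor would deliver), the following converts a rendered RGB frame into a packed UYVY (4:2:2) byte stream of the kind a USB camera might output.

```python
import numpy as np

def rgb_to_uyvy(frame_rgb: np.ndarray) -> bytes:
    """Repack an H x W x 3 uint8 RGB frame as a UYVY byte stream.
    BT.601 full-range coefficients are used purely for illustration."""
    rgb = frame_rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    h, w = y.shape
    assert w % 2 == 0, "UYVY packing assumes an even image width"
    to_u8 = lambda x: np.clip(x, 0, 255).astype(np.uint8)
    packed = np.empty((h, w * 2), dtype=np.uint8)
    packed[:, 0::4] = to_u8((u[:, 0::2] + u[:, 1::2]) / 2)  # shared U per pixel pair
    packed[:, 1::4] = to_u8(y[:, 0::2])                     # Y of even pixel
    packed[:, 2::4] = to_u8((v[:, 0::2] + v[:, 1::2]) / 2)  # shared V per pixel pair
    packed[:, 3::4] = to_u8(y[:, 1::2])                     # Y of odd pixel
    return packed.tobytes()
```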
图10为本申请实施例提供的一种基于仿真器直连的测试架构示意图。在图10所示方案中,输出至少两个场景图像,可以包括:将所述至少两个场景图像直接发送给所述控制系统。FIG. 10 is a schematic diagram of a test architecture based on a direct connection of an emulator provided by an embodiment of the present application. In the solution shown in FIG. 10 , outputting at least two scene images may include: directly sending the at least two scene images to the control system.
如图10所示,仿真器可以直接与控制系统连接,将生成的至少两个场景图像直接发送给所述控制系统。As shown in FIG. 10 , the simulator can be directly connected with the control system, and the generated at least two scene images are directly sent to the control system.
对于控制系统来说,无论是接视觉传感器,还是直接接到仿真器,接口和协议都可以是一样,控制系统可以不用关注获取到的图像是真实拍摄的图像还是直接接收的图像数据。在图10所示实现方式中,仿真器输出的图像可以不经过其它模块,而是直接给到控制系统,架构简单,易于实现,进一步降低了测试的成本。For the control system, whether it is connected to a visual sensor or directly to an emulator, the interface and protocol can be the same, and the control system does not need to pay attention to whether the acquired image is a real captured image or directly received image data. In the implementation shown in FIG. 10 , the image output by the emulator can be directly sent to the control system without passing through other modules. The structure is simple and easy to implement, which further reduces the cost of testing.
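The closed loop of FIG. 10 can be summarized, purely as an illustrative sketch, as follows; `simulator`, `control_system` and their methods are hypothetical placeholders for the emulator and the system under test, not interfaces defined by the embodiment.

```python
def run_direct_loop(simulator, control_system, steps: int = 1000) -> None:
    """Minimal sketch of the simulator-to-control-system direct loop."""
    control_signal = control_system.initial_signal()
    for _ in range(steps):
        # Advance the movable platform in the virtual scene model and obtain
        # its new pose relative to the scene elements.
        relative_pose = simulator.step(control_signal)
        # Render the scene images (with parallax) that the two vision sensors
        # would observe from that pose.
        left_img, right_img = simulator.render_views(relative_pose)
        # The control system consumes the images exactly as if they came from
        # real sensors and produces the next control signal.
        control_signal = control_system.update(left_img, right_img)
```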
上述实施例提供了立体仿真测试的多种实现方法，需要说明的是，在此基础上可以扩展更多种实现方法，只要保证控制系统获取到的图像是具有视差的图像即可。The above embodiments provide multiple implementations of stereo simulation testing. It should be noted that further implementations can be derived on this basis, as long as it is ensured that the images obtained by the control system are images with parallax.
可选的，所述仿真器确定的场景图像为单一场景图像，在仿真器后可以设置图像输出接口，所述图像输出接口可以根据所述单一场景图像输出所述至少两个场景图像，从而有效减轻仿真器的负担。Optionally, the scene image determined by the simulator is a single scene image; an image output interface may be arranged after the simulator, and the image output interface may output the at least two scene images according to the single scene image, thereby effectively reducing the burden on the simulator.
其中，所述单一场景图像可以是所述可移动平台的任一位置可观测到的场景图像，例如，其中一个视觉传感器可观测到的场景图像，或者两个视觉传感器之间的中点可观测到的场景图像。The single scene image may be a scene image observable from any position on the movable platform, for example, the scene image observable by one of the vision sensors, or the scene image observable from the midpoint between the two vision sensors.
所述图像输出接口可以存储有立体视觉参数,用以实现将单一场景图像转换为具有视差的多个场景图像。此外,所述图像输出接口还可以存储有所述虚拟的场景模型中各个场景元素的信息,从而更加准确地还原各个视觉传感器可观测到的场景图像。The image output interface may store stereo vision parameters to convert a single scene image into multiple scene images with parallax. In addition, the image output interface may also store the information of each scene element in the virtual scene model, so as to restore the scene image observable by each visual sensor more accurately.
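One way such an interface could derive a second view from a single scene image is depth-image-based rendering: with a depth map of the rendered view and the stored stereo vision parameters (baseline and focal length), each pixel is shifted by its disparity. The sketch below is a minimal, non-limiting illustration under those assumptions; it ignores occlusions and hole filling.

```python
import numpy as np

def synthesize_second_view(image: np.ndarray, depth: np.ndarray,
                           baseline_m: float, focal_px: float) -> np.ndarray:
    """Warp a single rendered view by per-pixel disparity d = f * B / Z to
    approximate the image seen by a second, horizontally offset vision sensor."""
    h, w = depth.shape
    disparity = focal_px * baseline_m / np.maximum(depth, 1e-6)
    second = np.zeros_like(image)
    xs = np.arange(w)
    for row in range(h):
        new_x = np.clip((xs - disparity[row]).astype(int), 0, w - 1)
        second[row, new_x] = image[row, xs]  # later writes win; no occlusion handling
    return second
```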
可选的，也可以在视觉传感器后增加模块为场景图像增加视差。例如，至少两个视觉传感器接收到的场景图像是一样的，在视觉传感器后加设转换模块，通过所述转换模块对所述场景图像进行转换，得到具有视差的至少两个场景图像并输出到控制系统。Optionally, a module may also be added after the vision sensors to add parallax to the scene images. For example, if the scene images received by the at least two vision sensors are identical, a conversion module may be arranged after the vision sensors to convert the scene images into at least two scene images with parallax, which are then output to the control system.
综上所述，在本申请实施例中，输入到控制系统的至少两个场景图像是具有视差的，但是，视差具体是在哪个环节得到的，本申请实施例不作限制。To sum up, in the embodiments of the present application, the at least two scene images input to the control system have parallax; however, the specific stage at which the parallax is introduced is not limited in the embodiments of the present application.
在上述各实施例提供的技术方案的基础上，可选的是，所述方法还包括：根据所述控制信号以及虚拟的场景模型，确定对应的传感信息；输出所述传感信息，以使所述控制系统根据所述至少两个场景图像以及所述传感信息确定对应的控制信号。On the basis of the technical solutions provided by the above embodiments, optionally, the method further includes: determining corresponding sensing information according to the control signal and the virtual scene model; and outputting the sensing information, so that the control system determines a corresponding control signal according to the at least two scene images and the sensing information.
在实际使用时,可以在可移动平台上设置多种类型的传感器,并不局限于视觉传感器。相应的,在仿真测试时,通过设计多种传感信息输入到控制系统,可以有效测试控制系统在不同状态下的运行情况,满足不同应用场景的需求。In actual use, various types of sensors can be provided on the movable platform, and are not limited to vision sensors. Correspondingly, during the simulation test, by designing a variety of sensor information to input into the control system, the operation of the control system in different states can be effectively tested to meet the needs of different application scenarios.
其中,所述传感信息可以为任意类型的传感信息,包括但不限于:风速、温度、湿度、天气等。The sensing information may be any type of sensing information, including but not limited to: wind speed, temperature, humidity, weather, and the like.
可选的,所述传感信息可以包括点云数据。在实际使用时,点云数据可以通过激光雷达检测得到,激光雷达可以用于辅助感知周围环境。Optionally, the sensing information may include point cloud data. In actual use, point cloud data can be detected by lidar, and lidar can be used to assist in perceiving the surrounding environment.
相应的，在仿真测试时，根据所述控制信号以及虚拟的场景模型，确定对应的传感信息，可以包括：根据所述控制信号以及虚拟的场景模型，确定所述可移动平台与所述场景元素之间的相对位姿；根据所述可移动平台与所述场景元素之间的相对位姿确定点云数据。Correspondingly, during the simulation test, determining the corresponding sensing information according to the control signal and the virtual scene model may include: determining the relative pose between the movable platform and the scene elements according to the control signal and the virtual scene model; and determining point cloud data according to the relative pose between the movable platform and the scene elements.
具体的，在仿真测试时，仿真器除了输出具有视差的至少两个场景图像以外，还可以输出虚拟的场景模型中场景元素对应的点云数据，所述点云数据用于模拟激光雷达采集到的点云数据，可以描述场景元素的深度信息，使得控制系统可以根据所述至少两个场景图像以及所述点云数据确定对应的控制信号，不仅可以测试控制系统对场景图像的响应，还可以测试控制系统对点云数据的响应，有效提高测试的维度，提升测试效果。Specifically, during the simulation test, in addition to outputting the at least two scene images with parallax, the simulator may also output point cloud data corresponding to the scene elements in the virtual scene model. The point cloud data simulates the point cloud data collected by a lidar and can describe the depth information of the scene elements, so that the control system can determine the corresponding control signal according to the at least two scene images and the point cloud data. In this way, not only the response of the control system to the scene images but also its response to the point cloud data can be tested, which effectively increases the dimensions of the test and improves the test effect.
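As a non-limiting sketch of how lidar-like point cloud data might be derived from the relative pose between the movable platform and the scene elements, the following transforms scene-element points, assumed to be known in the scene model's world frame, into a virtual sensor frame; the data layout and range limit are assumptions for illustration.

```python
import numpy as np

def simulate_point_cloud(element_points_world: np.ndarray,
                         rotation_world_to_sensor: np.ndarray,
                         translation_world_to_sensor: np.ndarray,
                         max_range_m: float = 100.0) -> np.ndarray:
    """Transform N x 3 scene-element points from the world frame into the
    virtual lidar frame given by the platform's current relative pose, and
    keep only points within the assumed sensor range."""
    pts_sensor = element_points_world @ rotation_world_to_sensor.T + translation_world_to_sensor
    ranges = np.linalg.norm(pts_sensor, axis=1)
    return pts_sensor[ranges <= max_range_m]
```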
在上述各实施例提供的技术方案的基础上，可选的是，所述方法还包括：根据所述控制信号，确定所述可移动平台的运行状态；根据所述可移动平台的运行状态，对所述控制系统进行评价。On the basis of the technical solutions provided in the foregoing embodiments, optionally, the method further includes: determining the operating state of the movable platform according to the control signal; and evaluating the control system according to the operating state of the movable platform.
以所述控制系统为应用于车辆的驾驶控制系统，所述可移动平台为车辆为例，仿真器可以通过控制信号和虚拟的场景模型，确定要显示的至少两个场景图像或传感信息，至少两个场景图像或传感信息是对真实环境数据的模拟，驾驶控制系统可以对至少两个场景图像或传感信息进行识别，根据识别到的信息和预设的驾驶控制策略，得到对应的决策即控制信号，比如检测到斑马线则输出减速信号。Taking the control system being a driving control system applied to a vehicle and the movable platform being a vehicle as an example, the simulator can determine, based on the control signal and the virtual scene model, the at least two scene images or sensing information to be presented; the at least two scene images or sensing information are simulations of real environment data. The driving control system can recognize the at least two scene images or sensing information and obtain, according to the recognized information and a preset driving control strategy, the corresponding decision, i.e., a control signal; for example, when a zebra crossing is detected, a deceleration signal is output.
在驾驶控制系统输出控制信号后,仿真器会根据控制信号,确定按照所述控制信号行驶的车辆的运行状态,例如所述车辆在车道中的位置、与周围障碍物的距离等等。After the driving control system outputs a control signal, the simulator will determine, according to the control signal, the running state of the vehicle driving according to the control signal, such as the position of the vehicle in the lane, the distance to surrounding obstacles, and so on.
然后，可以根据所述车辆的运行状态，对所述控制系统进行评价，例如判断是否偏离行车道、是否闯红灯、是否与障碍物距离过近等等，并根据判断结果输出对驾驶控制系统的评价。所述评价可以为评分或者是否合格等。Then, the control system can be evaluated according to the running state of the vehicle, for example by judging whether the vehicle deviates from the driving lane, runs a red light, or comes too close to an obstacle, and the evaluation of the driving control system is output according to the judgment result. The evaluation may be a score or a pass/fail result.
举例来说，若所述驾驶控制系统在仿真测试过程中，出现控制车辆闯红灯、与障碍物相撞或者其它不按交通规则行驶的行为或危险的行为，则可以认为所述驾驶控制系统是不合格的，需要重新进行算法优化。For example, if, during the simulation test, the driving control system controls the vehicle to run a red light, collide with an obstacle, or otherwise drive dangerously or in violation of traffic rules, the driving control system can be considered unqualified and its algorithm needs to be re-optimized.
若所述驾驶控制系统在整个仿真测试过程中平稳、顺利地行驶，则可以认为所述控制系统是合格的。在仿真测试合格后，可以安排进行实际道路测试或者其它测试，在全部测试完成后，可以将所述驾驶控制系统投入使用。If the driving control system drives stably and smoothly throughout the simulation test, the control system can be considered qualified. After the simulation test is passed, an actual road test or other tests can be arranged, and after all tests are completed, the driving control system can be put into use.
通过所述车辆的运行状态,能够有效实现对所述控制系统的评价,满足仿真测试的需求,提前解决或收敛大部分测试问题,降低后期道路测试的成本,提高测试效率。Through the running state of the vehicle, the evaluation of the control system can be effectively realized, the requirements of the simulation test can be met, most of the test problems can be solved or converged in advance, the cost of the later road test can be reduced, and the test efficiency can be improved.
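The evaluation described above can be reduced to rule checks over the logged running states. The following is a minimal, non-limiting sketch; the field names, thresholds and scoring scheme are assumptions, not values taken from the embodiment.

```python
def evaluate_driving_control(states: list) -> dict:
    """Judge the driving control system from the simulated vehicle's running
    states; each state is assumed to be a dict with keys such as
    'lane_offset_m', 'ran_red_light', 'collided' and 'min_obstacle_dist_m'."""
    violations = []
    for s in states:
        if abs(s.get("lane_offset_m", 0.0)) > 1.5:
            violations.append("deviated from the driving lane")
        if s.get("ran_red_light", False):
            violations.append("ran a red light")
        if s.get("collided", False):
            violations.append("collided with an obstacle")
        elif s.get("min_obstacle_dist_m", float("inf")) < 0.5:
            violations.append("came too close to an obstacle")
    return {
        "qualified": not violations,               # any violation fails the test
        "score": max(0, 100 - 10 * len(violations)),
        "violations": violations,
    }
```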
图11A为本申请实施例提供的一种仿真测试装置的结构示意图。所述装置应用于仿真测试系统；所述仿真测试系统用于基于虚拟的场景模型测试可移动平台的控制系统，所述场景模型包括多个场景元素；所述控制系统用于，基于至少两个视觉传感器观测到的场景图像，输出用于控制所述可移动平台的控制信号。如图11A所示，所述装置可以包括：FIG. 11A is a schematic structural diagram of a simulation testing apparatus provided by an embodiment of the present application. The apparatus is applied to a simulation test system; the simulation test system is used to test a control system of a movable platform based on a virtual scene model, and the scene model includes a plurality of scene elements; the control system is configured to output, based on scene images observed by at least two vision sensors, a control signal for controlling the movable platform. As shown in FIG. 11A, the apparatus may include:
获取模块1101,用于获取所述控制系统输出的所述可移动平台的控制信号;an acquisition module 1101, configured to acquire the control signal of the movable platform output by the control system;
模拟模块1102,用于基于所述控制信号,模拟所述可移动平台在所述场景模型中运动,得到所述可移动平台与所述场景元素之间的相对位姿;A simulation module 1102, configured to simulate the movement of the movable platform in the scene model based on the control signal, to obtain the relative pose between the movable platform and the scene element;
生成模块1103，用于根据所述相对位姿生成多个场景图像，所述多个场景图像包括所述可移动平台在所述场景模型中运动时，至少两个所述视觉传感器观测到的场景图像；A generating module 1103, configured to generate a plurality of scene images according to the relative pose, where the plurality of scene images include the scene images observed by at least two of the vision sensors when the movable platform moves in the scene model;
输出模块1104,用于输出所述多个场景图像,以使所述控制系统根据所述多个场景图像生成对应的控制信号。The output module 1104 is configured to output the plurality of scene images, so that the control system generates corresponding control signals according to the plurality of scene images.
在一种可能的实现方式中,所述生成模块1103在根据所述相对位姿生成多个场景图像时,具体用于:In a possible implementation manner, when the generating module 1103 generates multiple scene images according to the relative pose, it is specifically used for:
根据所述可移动平台与所述场景元素之间的相对位姿,确定所述可移动平台可观测到的场景元素的成像变化,并基于所述成像变化生成多个场景图像。According to the relative pose between the movable platform and the scene element, an imaging change of the scene element observable by the movable platform is determined, and a plurality of scene images are generated based on the imaging change.
在一种可能的实现方式中,所述生成模块1103在根据所述相对位姿生成多个场景图像时,具体用于:In a possible implementation manner, when the generating module 1103 generates multiple scene images according to the relative pose, it is specifically used for:
根据所述可移动平台与所述场景元素之间的相对位姿以及立体视觉参数,生成所述多个场景图像。The plurality of scene images are generated according to the relative pose and stereo vision parameters between the movable platform and the scene elements.
在一种可能的实现方式中,所述立体视觉参数由所述至少两个视觉传感器的安装位姿和/或所述至少两个视觉传感器之间的相对位姿确定。In a possible implementation manner, the stereo vision parameter is determined by the installation pose of the at least two vision sensors and/or the relative pose between the at least two vision sensors.
在一种可能的实现方式中,所述输出模块1104在输出所述多个场景图像时,具体用于:In a possible implementation manner, when the output module 1104 outputs the multiple scene images, it is specifically used for:
将所述多个场景图像通过图像输出接口发送给所述控制系统。Sending the plurality of scene images to the control system through an image output interface.
在一种可能的实现方式中,所述输出模块1104在将所述多个场景图像通过图像输出接口发送给所述控制系统时,具体用于:In a possible implementation manner, when the output module 1104 sends the plurality of scene images to the control system through an image output interface, it is specifically configured to:
将所述多个场景图像转换为预设格式的多个场景图像;converting the plurality of scene images into a plurality of scene images in a preset format;
将所述预设格式的多个场景图像通过图像输出接口发送给所述控制系统。Sending the multiple scene images in the preset format to the control system through an image output interface.
在一种可能的实现方式中,所述输出模块1104具体用于:In a possible implementation manner, the output module 1104 is specifically used for:
将所述多个场景图像发送给显示设备，以通过所述显示设备显示所述多个场景图像，使得所述控制系统基于所述至少两个视觉传感器拍摄所述显示设备得到的图像输出对应的控制信号；Send the plurality of scene images to a display device, so that the plurality of scene images are displayed by the display device and the control system outputs corresponding control signals based on the images obtained by the at least two vision sensors photographing the display device;
其中,所述视觉传感器的数量和显示的场景图像的数量相同;所述至少两个视觉传感器与所述多个场景图像一一对应,所述视觉传感器用于对对应的场景图像进行拍摄。The number of the visual sensors is the same as the number of displayed scene images; the at least two visual sensors are in one-to-one correspondence with the plurality of scene images, and the visual sensors are used to photograph the corresponding scene images.
在一种可能的实现方式中，所述显示设备的数量与所述视觉传感器的数量相同，至少两个所述显示设备与所述至少两个视觉传感器一一对应，每一视觉传感器用于对对应的显示设备显示的画面进行拍摄；In a possible implementation manner, the number of the display devices is the same as the number of the vision sensors, at least two of the display devices are in one-to-one correspondence with the at least two vision sensors, and each vision sensor is used to photograph the picture displayed by its corresponding display device;
相应的,所述输出模块1104在将所述多个场景图像发送给显示设备,以通过所述显示设备显示所述多个场景图像时,具体用于:Correspondingly, when the output module 1104 sends the plurality of scene images to a display device to display the plurality of scene images through the display device, the output module 1104 is specifically configured to:
将每一场景图像发送给与其对应的显示设备进行显示,以通过至少两个所述显示设备显示所述多个场景图像。Each scene image is sent to its corresponding display device for display, so as to display the plurality of scene images through at least two of the display devices.
在一种可能的实现方式中,所述输出模块1104在将每一场景图像发送给与其对应的显示设备进行显示之前,还用于:In a possible implementation manner, before sending each scene image to its corresponding display device for display, the output module 1104 is further configured to:
根据所述视觉传感器对应的标定参数,对所述多个场景图像中的至少部分图像进行图像转换。Image conversion is performed on at least part of the multiple scene images according to the calibration parameters corresponding to the visual sensor.
在一种可能的实现方式中,所述生成模块1103在根据所述相对位姿生成多个场景图像时,具体用于:In a possible implementation manner, when the generating module 1103 generates multiple scene images according to the relative pose, it is specifically used for:
根据所述可移动平台与所述场景元素之间的相对位姿、立体视觉参数以及所述视觉传感器对应的标定参数,生成多个场景图像。A plurality of scene images are generated according to the relative poses between the movable platform and the scene elements, stereo vision parameters, and calibration parameters corresponding to the vision sensor.
在一种可能的实现方式中,所述生成模块1103还用于:In a possible implementation manner, the generating module 1103 is further configured to:
通过相机标定法对所述视觉传感器进行标定,以确定所述视觉传感器的标定参数。The vision sensor is calibrated by a camera calibration method to determine the calibration parameters of the vision sensor.
在一种可能的实现方式中,所述显示设备为3D显示设备;In a possible implementation manner, the display device is a 3D display device;
所述输出模块1104在将所述多个场景图像发送给显示设备,以通过所述显示设备显示所述多个场景图像时,具体用于:When the output module 1104 sends the plurality of scene images to a display device to display the plurality of scene images through the display device, the output module 1104 is specifically configured to:
将所述多个场景图像发送给所述3D显示设备,以使所述3D显示设备通过3D投影方式显示所述多个场景图像。Sending the plurality of scene images to the 3D display device, so that the 3D display device displays the plurality of scene images through 3D projection.
在一种可能的实现方式中,所述输出模块1104还用于:In a possible implementation manner, the output module 1104 is further configured to:
根据所述可移动平台与所述场景元素之间的相对位姿,确定对应的传感信息;Determine corresponding sensing information according to the relative pose between the movable platform and the scene element;
输出所述传感信息,以使所述控制系统根据所述多个场景图像以及所述传感信息确定对应的控制信号。The sensing information is output, so that the control system determines a corresponding control signal according to the plurality of scene images and the sensing information.
在一种可能的实现方式中,所述传感信息包括所述场景元素对应的点云数据。In a possible implementation manner, the sensing information includes point cloud data corresponding to the scene element.
在一种可能的实现方式中,所述输出模块1104还用于:In a possible implementation manner, the output module 1104 is further configured to:
根据所述控制信号,确定所述可移动平台的运行状态;determining the operating state of the movable platform according to the control signal;
根据所述可移动平台的运行状态,对所述控制系统进行评价。The control system is evaluated according to the operating state of the movable platform.
在一种可能的实现方式中,所述可移动平台为车辆,所述控制系统为应用于车辆的驾驶控制系统。In a possible implementation manner, the movable platform is a vehicle, and the control system is a driving control system applied to the vehicle.
本实施例提供的仿真测试装置,可以用于执行图2所示的仿真测试方法,其具体实现原理和效果均可以参见前述实施例,此处不再赘述。The simulation testing apparatus provided in this embodiment can be used to execute the simulation testing method shown in FIG. 2 , and the specific implementation principles and effects thereof can refer to the foregoing embodiments, which will not be repeated here.
图11B为本申请实施例提供的另一种仿真测试装置的结构示意图。如图11B所示,所述装置可以包括:FIG. 11B is a schematic structural diagram of another simulation testing apparatus provided by an embodiment of the present application. As shown in Figure 11B, the apparatus may include:
获取模块1111,用于获取待测试的控制系统输出的控制信号,所述控制信号为所述控制系统根据历史输入的场景图像和预设的控制模型输出的用于控制可移动平台的控制信号;The acquisition module 1111 is used to acquire the control signal output by the control system to be tested, the control signal is the control signal output by the control system according to the historically input scene image and the preset control model for controlling the movable platform;
输出模块1112,用于根据所述控制信号以及虚拟的场景模型,输出至少两个场景图像;an output module 1112, configured to output at least two scene images according to the control signal and the virtual scene model;
其中,所述虚拟的场景模型包括多个场景元素;Wherein, the virtual scene model includes a plurality of scene elements;
所述至少两个场景图像包括第一场景图像和第二场景图像，所述第一场景图像包括第一像素区域，所述第二场景图像包括第二像素区域，所述第一像素区域和所述第二像素区域描述同一所述场景元素；所述第一像素区域在所述第一场景图像中的位置，与所述第二像素区域在所述第二场景图像中的位置之间的位置偏差，根据所述可移动平台的视觉传感器之间的相对位姿确定；The at least two scene images include a first scene image and a second scene image, the first scene image includes a first pixel area, the second scene image includes a second pixel area, and the first pixel area and the second pixel area describe the same scene element; a position deviation between the position of the first pixel area in the first scene image and the position of the second pixel area in the second scene image is determined according to the relative pose between the vision sensors of the movable platform;
输出的多个所述场景图像用于表征所述可移动平台响应于所述控制信号运动，与所述场景元素之间的相对位姿改变后，所述可移动平台的所述视觉传感器所观测到的所述场景元素的场景图像。The plurality of output scene images are used to represent the scene images of the scene elements observed by the vision sensors of the movable platform after the movable platform has moved in response to the control signal and the relative pose between the movable platform and the scene elements has changed.
在一种可能的实现方式中,所述输出模块1112具体用于:In a possible implementation manner, the output module 1112 is specifically used for:
根据所述控制信号、虚拟的场景模型以及立体视觉参数,确定至少两个场景图像;determining at least two scene images according to the control signal, the virtual scene model and the stereo vision parameters;
输出所述至少两个场景图像。The at least two scene images are output.
在一种可能的实现方式中,所述输出模块1112具体用于:In a possible implementation manner, the output module 1112 is specifically used for:
根据所述控制信号和虚拟的场景模型,确定所述可移动平台与所述场景 元素的相对位姿;According to the control signal and the virtual scene model, determine the relative pose of the movable platform and the scene element;
根据所述可移动平台与所述场景元素的相对位姿以及立体视觉参数,确定至少两个场景图像。At least two scene images are determined according to the relative poses of the movable platform and the scene elements and stereo vision parameters.
在一种可能的实现方式中,所述立体视觉参数由所述可移动平台的视觉传感器的安装位姿和/或所述可移动平台的视觉传感器之间的相对位姿确定。In a possible implementation manner, the stereo vision parameter is determined by the installation pose of the vision sensor of the movable platform and/or the relative pose between the vision sensors of the movable platform.
在一种可能的实现方式中,所述输出模块1112在输出至少两个场景图像时,具体用于:In a possible implementation manner, when the output module 1112 outputs at least two scene images, it is specifically used for:
将所述至少两个场景图像通过图像输出接口发送给所述控制系统。The at least two scene images are sent to the control system through an image output interface.
在一种可能的实现方式中,所述输出模块1112将所述至少两个场景图像通过图像输出接口发送给所述控制系统时,具体用于:In a possible implementation manner, when the output module 1112 sends the at least two scene images to the control system through an image output interface, it is specifically used for:
将所述至少两个场景图像转换为预设格式的至少两个场景图像;converting the at least two scene images into at least two scene images in a preset format;
将所述预设格式的至少两个场景图像通过图像输出接口发送给所述控制系统。The at least two scene images in the preset format are sent to the control system through an image output interface.
在一种可能的实现方式中,所述输出模块1112在输出至少两个场景图像时,具体用于:In a possible implementation manner, when the output module 1112 outputs at least two scene images, it is specifically used for:
将所述至少两个场景图像发送给显示设备，以通过所述显示设备显示所述至少两个场景图像，使得所述控制系统根据至少两个视觉传感器拍摄到的图像和预设的控制模型确定对应的控制信号；Send the at least two scene images to a display device, so that the at least two scene images are displayed by the display device and the control system determines corresponding control signals according to the images captured by the at least two vision sensors and a preset control model;
其中,所述视觉传感器的数量和输出的场景图像的数量相同;所述至少两个视觉传感器与所述至少两个场景图像一一对应,所述视觉传感器用于对对应的场景图像进行拍摄。The number of the visual sensors is the same as the number of output scene images; the at least two visual sensors are in one-to-one correspondence with the at least two scene images, and the visual sensors are used to photograph the corresponding scene images.
在一种可能的实现方式中，所述显示设备的数量与所述视觉传感器的数量相同，至少两个所述显示设备与所述至少两个视觉传感器一一对应，每一视觉传感器用于对对应的显示设备显示的画面进行拍摄；In a possible implementation manner, the number of the display devices is the same as the number of the vision sensors, at least two of the display devices are in one-to-one correspondence with the at least two vision sensors, and each vision sensor is used to photograph the picture displayed by its corresponding display device;
相应的,所述输出模块1112在将所述至少两个场景图像发送给显示设备,以通过所述显示设备显示所述至少两个场景图像时,具体用于:Correspondingly, when the output module 1112 sends the at least two scene images to a display device, so as to display the at least two scene images through the display device, the output module 1112 is specifically configured to:
将每一场景图像发送给与其对应的显示设备进行显示,以通过至少两个所述显示设备显示所述至少两个场景图像。Each scene image is sent to its corresponding display device for display, so as to display the at least two scene images through at least two of the display devices.
在一种可能的实现方式中,所述输出模块1112还用于:In a possible implementation manner, the output module 1112 is further configured to:
在将场景图像发送给与其对应的显示设备进行显示之前,根据所述视觉 传感器对应的标定参数,对所述至少两个场景图像中的至少部分图像进行图像转换。Before sending the scene image to the corresponding display device for display, image conversion is performed on at least part of the at least two scene images according to the calibration parameters corresponding to the visual sensor.
在一种可能的实现方式中,所述输出模块1112在根据所述控制信号以及虚拟的场景模型,输出至少两个场景图像时,具体用于:In a possible implementation manner, when outputting at least two scene images according to the control signal and the virtual scene model, the output module 1112 is specifically configured to:
根据所述控制信号、虚拟的场景模型、立体视觉参数以及所述视觉传感器对应的标定参数,确定至少两个场景图像;Determine at least two scene images according to the control signal, the virtual scene model, the stereo vision parameters, and the calibration parameters corresponding to the vision sensor;
输出所述至少两个场景图像。The at least two scene images are output.
在一种可能的实现方式中,所述输出模块1112还用于:In a possible implementation manner, the output module 1112 is further configured to:
通过相机标定法对所述视觉传感器进行标定,以确定所述视觉传感器的标定参数。The vision sensor is calibrated by a camera calibration method to determine the calibration parameters of the vision sensor.
在一种可能的实现方式中,所述显示设备为3D显示设备;In a possible implementation manner, the display device is a 3D display device;
所述输出模块1112在将所述至少两个场景图像发送给显示设备,以通过所述显示设备显示所述至少两个场景图像时,具体用于:When the output module 1112 sends the at least two scene images to a display device, so as to display the at least two scene images through the display device, the output module 1112 is specifically configured to:
将所述至少两个场景图像发送给所述3D显示设备,以使所述3D显示设备通过3D投影方式显示所述至少两个场景图像。The at least two scene images are sent to the 3D display device, so that the 3D display device displays the at least two scene images by means of 3D projection.
在一种可能的实现方式中,所述输出模块1112还用于:In a possible implementation manner, the output module 1112 is further configured to:
根据所述控制信号以及虚拟的场景模型,确定对应的传感信息;Determine corresponding sensing information according to the control signal and the virtual scene model;
输出所述传感信息,以使所述控制系统根据所述至少两个场景图像以及所述传感信息确定对应的控制信号。The sensing information is output, so that the control system determines a corresponding control signal according to the at least two scene images and the sensing information.
在一种可能的实现方式中,所述传感信息包括所述场景元素对应的点云数据;In a possible implementation manner, the sensing information includes point cloud data corresponding to the scene element;
所述输出模块1112在根据所述控制信号以及虚拟的场景模型,确定对应的传感信息时,具体用于:When determining the corresponding sensing information according to the control signal and the virtual scene model, the output module 1112 is specifically used for:
根据所述控制信号以及虚拟的场景模型,确定所述可移动平台与所述场景元素之间的相对位姿;determining the relative pose between the movable platform and the scene element according to the control signal and the virtual scene model;
根据所述可移动平台与所述场景元素之间的相对位姿确定点云数据。Point cloud data is determined according to the relative pose between the movable platform and the scene element.
在一种可能的实现方式中,所述输出模块1112还用于:In a possible implementation manner, the output module 1112 is further configured to:
根据所述控制信号,确定所述可移动平台的运行状态;determining the operating state of the movable platform according to the control signal;
根据所述可移动平台的运行状态,对所述控制系统进行评价。The control system is evaluated according to the operating state of the movable platform.
在一种可能的实现方式中,所述控制系统为应用于车辆的驾驶控制系统, 所述可移动平台为车辆。In a possible implementation manner, the control system is a driving control system applied to a vehicle, and the movable platform is a vehicle.
本实施例提供的仿真测试装置,可以用于执行图3A至图10所示实施例中的仿真测试方法,其具体实现原理和效果均可以参见前述实施例,此处不再赘述。The simulation testing apparatus provided in this embodiment can be used to execute the simulation testing methods in the embodiments shown in FIG. 3A to FIG. 10 , and the specific implementation principles and effects can be found in the foregoing embodiments, which are not repeated here.
图12A为本申请实施例提供的一种仿真测试系统的结构示意图。所述仿真测试系统用于基于虚拟的场景模型测试可移动平台的控制系统，所述场景模型包括多个场景元素；所述控制系统用于，基于至少两个视觉传感器观测到的场景图像，输出用于控制所述可移动平台的控制信号。FIG. 12A is a schematic structural diagram of a simulation testing system provided by an embodiment of the present application. The simulation test system is used to test a control system of a movable platform based on a virtual scene model, and the scene model includes a plurality of scene elements; the control system is configured to output, based on scene images observed by at least two vision sensors, control signals for controlling the movable platform.
如图12A所示,所述系统可以包括:仿真器1201和图像输出设备1202;As shown in FIG. 12A, the system may include: an emulator 1201 and an image output device 1202;
所述仿真器1201用于获取所述控制系统输出的所述可移动平台的控制信号，基于所述控制信号，模拟所述可移动平台在所述场景模型中运动，得到所述可移动平台与所述场景元素之间的相对位姿，并生成基于所述相对位姿可观测到的场景图像；The simulator 1201 is configured to acquire the control signal of the movable platform output by the control system, simulate, based on the control signal, the movement of the movable platform in the scene model to obtain the relative pose between the movable platform and the scene elements, and generate a scene image observable on the basis of the relative pose;
所述图像输出设备1202用于获取仿真器1201生成的场景图像，并根据获取到的场景图像输出多个场景图像，所述多个场景图像包括所述可移动平台在所述场景模型中运动时至少两个所述视觉传感器观测到的场景图像，以使所述控制系统根据所述多个场景图像生成对应的控制信号。The image output device 1202 is configured to acquire the scene image generated by the simulator 1201 and output a plurality of scene images according to the acquired scene image, where the plurality of scene images include the scene images observed by at least two of the vision sensors when the movable platform moves in the scene model, so that the control system generates corresponding control signals according to the plurality of scene images.
在一种可能的实现方式中,所述仿真器1201在生成基于所述相对位姿可观测到的场景图像时,具体用于:In a possible implementation manner, when the simulator 1201 generates an observable scene image based on the relative pose, it is specifically used for:
根据所述可移动平台与所述场景元素之间的相对位姿,生成所述多个场景图像。The plurality of scene images are generated based on relative poses between the movable platform and the scene elements.
在一种可能的实现方式中,所述仿真器1201在根据所述相对位姿生成多个场景图像时,具体用于:In a possible implementation manner, when the simulator 1201 generates multiple scene images according to the relative pose, it is specifically used for:
根据所述可移动平台与所述场景元素之间的相对位姿,确定所述可移动平台可观测到的场景元素的成像变化,并基于所述成像变化生成多个场景图像。According to the relative pose between the movable platform and the scene element, an imaging change of the scene element observable by the movable platform is determined, and a plurality of scene images are generated based on the imaging change.
在一种可能的实现方式中,所述仿真器1201在根据所述相对位姿生成多个场景图像时,具体用于:In a possible implementation manner, when the simulator 1201 generates multiple scene images according to the relative pose, it is specifically used for:
根据所述可移动平台与所述场景元素之间的相对位姿以及立体视觉参数, 生成所述多个场景图像。The plurality of scene images are generated according to the relative pose and stereo vision parameters between the movable platform and the scene elements.
在一种可能的实现方式中,所述立体视觉参数由所述至少两个视觉传感器的安装位姿和/或所述至少两个视觉传感器之间的相对位姿确定。In a possible implementation manner, the stereo vision parameter is determined by the installation pose of the at least two vision sensors and/or the relative pose between the at least two vision sensors.
在一种可能的实现方式中,所述图像输出设备1202包括图像输出接口,所述仿真器1201还用于:In a possible implementation manner, the image output device 1202 includes an image output interface, and the emulator 1201 is further used for:
将所述多个场景图像发送给所述图像输出接口,以使所述图像输出接口将所述多个场景图像发送给所述控制系统。Sending the plurality of scene images to the image output interface, so that the image output interface sends the plurality of scene images to the control system.
在一种可能的实现方式中,所述仿真器1201在将所述多个场景图像发送给所述图像输出接口时,具体用于:In a possible implementation manner, when the simulator 1201 sends the multiple scene images to the image output interface, it is specifically used for:
将所述多个场景图像转换为预设格式的多个场景图像;converting the plurality of scene images into a plurality of scene images in a preset format;
将所述预设格式的多个场景图像发送给所述图像输出接口。Sending the plurality of scene images in the preset format to the image output interface.
在一种可能的实现方式中,所述图像输出设备1202包括用于显示所述多个场景图像的显示设备;In a possible implementation manner, the image output device 1202 includes a display device for displaying the plurality of scene images;
所述仿真器1201还用于：将所述多个场景图像发送给显示设备，以通过所述显示设备显示所述多个场景图像，使得所述控制系统根据至少两个视觉传感器拍摄所述显示设备得到的图像输出对应的控制信号；The simulator 1201 is further configured to: send the plurality of scene images to a display device, so that the plurality of scene images are displayed by the display device and the control system outputs corresponding control signals based on the images obtained by the at least two vision sensors photographing the display device;
其中,所述视觉传感器的数量和显示的场景图像的数量相同;所述至少两个视觉传感器与所述多个场景图像一一对应,所述视觉传感器用于对对应的场景图像进行拍摄。The number of the visual sensors is the same as the number of displayed scene images; the at least two visual sensors are in one-to-one correspondence with the plurality of scene images, and the visual sensors are used to photograph the corresponding scene images.
在一种可能的实现方式中，所述显示设备的数量与所述视觉传感器的数量相同，至少两个所述显示设备与所述至少两个视觉传感器一一对应，每一视觉传感器用于对对应的显示设备显示的画面进行拍摄；In a possible implementation manner, the number of the display devices is the same as the number of the vision sensors, at least two of the display devices are in one-to-one correspondence with the at least two vision sensors, and each vision sensor is used to photograph the picture displayed by its corresponding display device;
相应的,所述仿真器1201在将所述多个场景图像发送给显示设备,以通过所述显示设备显示所述多个场景图像时,具体用于:Correspondingly, when the simulator 1201 sends the plurality of scene images to a display device to display the plurality of scene images through the display device, it is specifically used for:
将每一场景图像发送给与其对应的显示设备进行显示,以通过至少两个所述显示设备显示所述多个场景图像。Each scene image is sent to its corresponding display device for display, so as to display the plurality of scene images through at least two of the display devices.
在一种可能的实现方式中,所述仿真器1201还用于:In a possible implementation manner, the emulator 1201 is also used for:
在将场景图像发送给与其对应的显示设备进行显示之前,根据所述视觉传感器对应的标定参数,对所述多个场景图像中的至少部分图像进行图像转换。Before sending the scene image to the corresponding display device for display, image conversion is performed on at least part of the plurality of scene images according to the calibration parameter corresponding to the visual sensor.
在一种可能的实现方式中,所述仿真器1201在根据所述相对位姿生成多个场景图像时,具体用于:In a possible implementation manner, when the simulator 1201 generates multiple scene images according to the relative pose, it is specifically used for:
根据所述可移动平台与所述场景元素之间的相对位姿、立体视觉参数以及所述视觉传感器对应的标定参数,生成多个场景图像。A plurality of scene images are generated according to the relative poses between the movable platform and the scene elements, stereo vision parameters, and calibration parameters corresponding to the vision sensor.
在一种可能的实现方式中,所述仿真器1201还用于:In a possible implementation manner, the emulator 1201 is also used for:
通过相机标定法对所述视觉传感器进行标定,以确定所述视觉传感器的标定参数。The vision sensor is calibrated by a camera calibration method to determine the calibration parameters of the vision sensor.
在一种可能的实现方式中,所述显示设备为3D显示设备,所述3D显示设备通过3D投影方式显示所述多个场景图像。In a possible implementation manner, the display device is a 3D display device, and the 3D display device displays the plurality of scene images in a 3D projection manner.
在一种可能的实现方式中,所述系统还包括:光学系统;In a possible implementation manner, the system further includes: an optical system;
所述光学系统设置在所述显示设备与所述视觉传感器之间，所述光学系统用于对所述显示设备输出的场景图像进行光学转换，使得转换后的场景图像匹配所述视觉传感器的视场角。The optical system is arranged between the display device and the vision sensor, and the optical system is used to optically convert the scene image output by the display device so that the converted scene image matches the field of view of the vision sensor.
在一种可能的实现方式中,所述仿真器1201生成的场景图像为单一场景图像,所述图像输出设备1202具体用于根据所述单一场景图像输出所述多个场景图像。In a possible implementation manner, the scene image generated by the simulator 1201 is a single scene image, and the image output device 1202 is specifically configured to output the multiple scene images according to the single scene image.
在一种可能的实现方式中,所述仿真器1201还用于:In a possible implementation manner, the emulator 1201 is also used for:
根据所述可移动平台与所述场景元素之间的相对位姿,确定对应的传感信息;Determine corresponding sensing information according to the relative pose between the movable platform and the scene element;
输出所述传感信息,以使所述控制系统根据所述多个场景图像以及所述传感信息确定对应的控制信号。The sensing information is output, so that the control system determines a corresponding control signal according to the plurality of scene images and the sensing information.
在一种可能的实现方式中,所述传感信息包括所述场景元素对应的点云数据。In a possible implementation manner, the sensing information includes point cloud data corresponding to the scene element.
在一种可能的实现方式中,所述仿真器1201还用于:In a possible implementation manner, the emulator 1201 is also used for:
根据所述控制信号,确定所述可移动平台的运行状态;determining the operating state of the movable platform according to the control signal;
根据所述可移动平台的运行状态,对所述控制系统进行评价。The control system is evaluated according to the operating state of the movable platform.
在一种可能的实现方式中,所述可移动平台为车辆,所述控制系统为应用于车辆的驾驶控制系统。In a possible implementation manner, the movable platform is a vehicle, and the control system is a driving control system applied to the vehicle.
本实施例提供的仿真测试系统,可以用于执行图2所示的仿真测试方法,其具体实现原理和效果均可以参见前述实施例,此处不再赘述。The simulation testing system provided in this embodiment can be used to execute the simulation testing method shown in FIG. 2 , and the specific implementation principles and effects thereof can refer to the foregoing embodiments, which will not be repeated here.
图12B为本申请实施例提供的另一种仿真测试系统的结构示意图。如图12B所示,所述系统可以包括:仿真器1211和图像输出设备1222;FIG. 12B is a schematic structural diagram of another simulation testing system provided by an embodiment of the present application. As shown in FIG. 12B , the system may include: an emulator 1211 and an image output device 1222;
所述仿真器1211用于获取待测试的控制系统输出的控制信号,根据所述控制信号以及虚拟的场景模型,确定对应的场景图像;其中,所述虚拟的场景模型包括多个场景元素;The simulator 1211 is used to obtain the control signal output by the control system to be tested, and determine the corresponding scene image according to the control signal and the virtual scene model; wherein, the virtual scene model includes a plurality of scene elements;
所述图像输出设备1212用于获取所述场景图像,并根据所述场景图像输出至少两个场景图像;The image output device 1212 is configured to acquire the scene image, and output at least two scene images according to the scene image;
其中,所述虚拟的场景模型包括多个场景元素;Wherein, the virtual scene model includes a plurality of scene elements;
所述至少两个场景图像包括第一场景图像和第二场景图像，所述第一场景图像包括第一像素区域，所述第二场景图像包括第二像素区域，所述第一像素区域和所述第二像素区域描述同一所述场景元素；所述第一像素区域在所述第一场景图像中的位置，与所述第二像素区域在所述第二场景图像中的位置之间的位置偏差，根据可移动平台的视觉传感器之间的相对位姿确定；The at least two scene images include a first scene image and a second scene image, the first scene image includes a first pixel area, the second scene image includes a second pixel area, and the first pixel area and the second pixel area describe the same scene element; a position deviation between the position of the first pixel area in the first scene image and the position of the second pixel area in the second scene image is determined according to the relative pose between the vision sensors of a movable platform;
所述控制信号为所述控制系统根据历史输入的至少两个场景图像和预设的控制模型输出的用于控制所述可移动平台的控制信号;The control signal is a control signal for controlling the movable platform output by the control system according to the historically input at least two scene images and a preset control model;
输出的多个所述场景图像用于表征所述可移动平台响应于所述控制信号运动，与所述场景元素之间的相对位姿改变后，所述可移动平台的所述视觉传感器所观测到的所述场景元素的场景图像。The plurality of output scene images are used to represent the scene images of the scene elements observed by the vision sensors of the movable platform after the movable platform has moved in response to the control signal and the relative pose between the movable platform and the scene elements has changed.
在一种可能的实现方式中,所述仿真器1211确定的场景图像包括所述至少两个场景图像。In a possible implementation manner, the scene image determined by the simulator 1211 includes the at least two scene images.
在一种可能的实现方式中,所述仿真器1211具体用于:In a possible implementation manner, the emulator 1211 is specifically used for:
获取待测试的控制系统输出的控制信号,根据所述控制信号、虚拟的场景模型以及立体视觉参数,确定至少两个场景图像。The control signal output by the control system to be tested is acquired, and at least two scene images are determined according to the control signal, the virtual scene model and the stereo vision parameters.
在一种可能的实现方式中,所述仿真器1211在根据所述控制信号、虚拟的场景模型以及立体视觉参数,确定至少两个场景图像时,具体用于:In a possible implementation manner, when the simulator 1211 determines at least two scene images according to the control signal, the virtual scene model and the stereo vision parameters, it is specifically used for:
根据所述控制信号和虚拟的场景模型,确定所述可移动平台与所述场景元素的相对位姿;determining the relative pose of the movable platform and the scene element according to the control signal and the virtual scene model;
根据所述可移动平台与所述场景元素的相对位姿以及立体视觉参数,确定至少两个场景图像。At least two scene images are determined according to the relative poses of the movable platform and the scene elements and stereo vision parameters.
在一种可能的实现方式中,所述立体视觉参数由所述可移动平台的视觉传感器的安装位姿和/或所述可移动平台的视觉传感器之间的相对位姿确定。In a possible implementation manner, the stereo vision parameter is determined by the installation pose of the vision sensor of the movable platform and/or the relative pose between the vision sensors of the movable platform.
在一种可能的实现方式中,所述图像输出设备1212包括图像输出接口,所述仿真器1211还用于:In a possible implementation manner, the image output device 1212 includes an image output interface, and the emulator 1211 is further used for:
将确定的至少两个场景图像发送给所述图像输出接口,以使所述图像输出接口将所述至少两个场景图像发送给所述控制系统。Sending the determined at least two scene images to the image output interface, so that the image output interface sends the at least two scene images to the control system.
在一种可能的实现方式中,所述仿真器1211在将确定的至少两个场景图像发送给所述图像输出接口时,具体用于:In a possible implementation manner, when the simulator 1211 sends the determined at least two scene images to the image output interface, it is specifically used for:
将所述至少两个场景图像转换为预设格式的至少两个场景图像;converting the at least two scene images into at least two scene images in a preset format;
将所述预设格式的至少两个场景图像发送给所述图像输出接口。Sending at least two scene images in the preset format to the image output interface.
在一种可能的实现方式中,所述图像输出设备1212包括用于显示所述至少两个场景图像的显示设备;In a possible implementation manner, the image output device 1212 includes a display device for displaying the at least two scene images;
所述仿真器1211还用于：将所述至少两个场景图像发送给显示设备，以通过所述显示设备显示所述至少两个场景图像，使得所述控制系统根据至少两个视觉传感器拍摄到的图像和预设的控制模型确定对应的控制信号；The simulator 1211 is further configured to: send the at least two scene images to a display device, so that the at least two scene images are displayed by the display device and the control system determines corresponding control signals according to the images captured by the at least two vision sensors and a preset control model;
其中,所述视觉传感器的数量和输出的场景图像的数量相同;所述至少两个视觉传感器与所述至少两个场景图像一一对应,所述视觉传感器用于对对应的场景图像进行拍摄。The number of the visual sensors is the same as the number of output scene images; the at least two visual sensors are in one-to-one correspondence with the at least two scene images, and the visual sensors are used to photograph the corresponding scene images.
在一种可能的实现方式中，所述显示设备的数量与所述视觉传感器的数量相同，至少两个所述显示设备与所述至少两个视觉传感器一一对应，每一视觉传感器用于对对应的显示设备显示的画面进行拍摄；In a possible implementation manner, the number of the display devices is the same as the number of the vision sensors, at least two of the display devices are in one-to-one correspondence with the at least two vision sensors, and each vision sensor is used to photograph the picture displayed by its corresponding display device;
相应的,所述仿真器1211在将所述至少两个场景图像发送给显示设备,以通过所述显示设备显示所述至少两个场景图像时,具体用于:Correspondingly, when the emulator 1211 sends the at least two scene images to a display device to display the at least two scene images through the display device, it is specifically used for:
将每一场景图像发送给与其对应的显示设备进行显示,以通过至少两个所述显示设备显示所述至少两个场景图像。Each scene image is sent to its corresponding display device for display, so as to display the at least two scene images through at least two of the display devices.
在一种可能的实现方式中,所述仿真器1211还用于:In a possible implementation manner, the emulator 1211 is also used for:
在将场景图像发送给与其对应的显示设备进行显示之前,根据所述视觉传感器对应的标定参数,对所述至少两个场景图像中的至少部分图像进行图像转换。Before sending the scene image to the corresponding display device for display, image conversion is performed on at least part of the at least two scene images according to the calibration parameters corresponding to the visual sensor.
在一种可能的实现方式中,所述仿真器1211在根据所述控制信号以及虚 拟的场景模型,确定对应的场景图像时,具体用于:In a possible implementation manner, when the simulator 1211 determines the corresponding scene image according to the control signal and the virtual scene model, it is specifically used for:
根据所述控制信号、虚拟的场景模型、立体视觉参数以及所述视觉传感器对应的标定参数,确定至少两个场景图像。At least two scene images are determined according to the control signal, the virtual scene model, the stereo vision parameters, and the calibration parameters corresponding to the vision sensor.
在一种可能的实现方式中,所述仿真器1211还用于:In a possible implementation manner, the emulator 1211 is also used for:
通过相机标定法对所述视觉传感器进行标定,以确定所述视觉传感器的标定参数。The vision sensor is calibrated by a camera calibration method to determine the calibration parameters of the vision sensor.
在一种可能的实现方式中,所述显示设备为3D显示设备,所述3D显示设备通过3D投影方式显示所述至少两个场景图像。In a possible implementation manner, the display device is a 3D display device, and the 3D display device displays the at least two scene images in a 3D projection manner.
在一种可能的实现方式中,所述系统还包括:光学系统;In a possible implementation manner, the system further includes: an optical system;
所述光学系统设置在所述显示设备与所述视觉传感器之间，所述光学系统用于对所述显示设备输出的场景图像进行光学转换，使得转换后的场景图像匹配所述视觉传感器的视场角。The optical system is arranged between the display device and the vision sensor, and the optical system is used to optically convert the scene image output by the display device so that the converted scene image matches the field of view of the vision sensor.
在一种可能的实现方式中,所述仿真器1211确定的场景图像为单一场景图像,所述图像输出设备1212具体用于根据所述单一场景图像输出所述至少两个场景图像。In a possible implementation manner, the scene image determined by the simulator 1211 is a single scene image, and the image output device 1212 is specifically configured to output the at least two scene images according to the single scene image.
在一种可能的实现方式中,所述仿真器1211还用于:In a possible implementation manner, the emulator 1211 is also used for:
根据所述控制信号以及虚拟的场景模型,确定对应的传感信息;Determine corresponding sensing information according to the control signal and the virtual scene model;
输出所述传感信息,以使所述控制系统根据所述至少两个场景图像以及所述传感信息确定对应的控制信号。The sensing information is output, so that the control system determines a corresponding control signal according to the at least two scene images and the sensing information.
在一种可能的实现方式中,所述传感信息包括所述场景元素对应的点云数据;所述仿真器1211在根据所述控制信号以及虚拟的场景模型,确定对应的传感信息时,具体用于:In a possible implementation manner, the sensing information includes point cloud data corresponding to the scene element; when the simulator 1211 determines the corresponding sensing information according to the control signal and the virtual scene model, Specifically for:
根据所述控制信号以及虚拟的场景模型,确定所述可移动平台与所述场景元素之间的相对位姿;determining the relative pose between the movable platform and the scene element according to the control signal and the virtual scene model;
根据所述可移动平台与所述场景元素之间的相对位姿确定点云数据。Point cloud data is determined according to the relative pose between the movable platform and the scene element.
在一种可能的实现方式中,所述仿真器1211还用于:In a possible implementation manner, the emulator 1211 is also used for:
根据所述控制信号,确定所述可移动平台的运行状态;determining the operating state of the movable platform according to the control signal;
根据所述可移动平台的运行状态,对所述控制系统进行评价。The control system is evaluated according to the operating state of the movable platform.
在一种可能的实现方式中,所述控制系统为应用于车辆的驾驶控制系统,所述可移动平台为车辆。In a possible implementation manner, the control system is a driving control system applied to a vehicle, and the movable platform is a vehicle.
本实施例提供的仿真测试系统,可以用于执行图3A至图10所示实施例所述的仿真测试方法,其具体实现原理和效果均可以参见前述实施例,此处不再赘述。The simulation testing system provided in this embodiment can be used to execute the simulation testing methods described in the embodiments shown in FIG. 3A to FIG. 10 , and the specific implementation principles and effects can be referred to the foregoing embodiments, which will not be repeated here.
图13为本申请实施例提供的一种仿真器的结构示意图。如图13所示,所述仿真器包括:存储器1301和至少一个处理器1302;FIG. 13 is a schematic structural diagram of an emulator provided by an embodiment of the present application. As shown in FIG. 13 , the emulator includes: a memory 1301 and at least one processor 1302;
所述存储器1301存储计算机执行指令;The memory 1301 stores computer-executed instructions;
所述至少一个处理器1302执行所述存储器1301存储的计算机执行指令,使得所述至少一个处理器1302执行上述任一实施例所述的方法。The at least one processor 1302 executes the computer-executable instructions stored in the memory 1301, so that the at least one processor 1302 executes the method described in any of the foregoing embodiments.
可选地,上述存储器1301既可以是独立的,也可以跟处理器1302集成在一起。Optionally, the above-mentioned memory 1301 may be independent or integrated with the processor 1302 .
当存储器1301独立设置时,该仿真器还可以包括总线,用于连接存储器1301和处理器1302。When the memory 1301 is set independently, the emulator may further include a bus for connecting the memory 1301 and the processor 1302.
本申请实施例还提供一种计算机可读存储介质，所述可读存储介质上存储有计算机程序；所述计算机程序在被执行时，实现上述任一实施例所述的方法。An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the readable storage medium; when the computer program is executed, the method described in any of the foregoing embodiments is implemented.
本申请实施例还提供一种计算机程序产品，包括计算机程序，该计算机程序被处理器执行时实现上述任一实施例所述的方法。An embodiment of the present application further provides a computer program product, including a computer program, which, when executed by a processor, implements the method described in any of the foregoing embodiments.
本领域普通技术人员可以理解：实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成，前述的程序可以存储于一计算机可读取存储介质中，该程序在执行时，执行包括上述方法实施例的步骤；而前述的存储介质包括：只读内存(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。Those of ordinary skill in the art can understand that all or part of the steps of the above method embodiments can be implemented by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
在本申请所提供的几个实施例中，应该理解到，所揭露的设备和方法，可以通过其它的方式实现。例如，以上所描述的设备实施例仅仅是示意性的，例如，所述模块的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个模块可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative; for instance, the division of the modules is only a division of logical functions, and there may be other division manners in actual implementation, for example, multiple modules may be combined or integrated into another system, or some features may be ignored or not implemented.
上述以软件功能模块的形式实现的集成的模块,可以存储在一个计算机可读取存储介质中。上述软件功能模块存储在一个存储介质中,包括若干指 令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器执行本申请各个实施例所述方法的部分步骤。The above-mentioned integrated modules implemented in the form of software functional modules may be stored in a computer-readable storage medium. The above-mentioned software function modules are stored in a storage medium, and include several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute part of the steps of the methods described in the various embodiments of the present application.
应理解,上述处理器可以是中央处理单元(Central Processing Unit,简称CPU),还可以是其它通用处理器、数字信号处理器(Digital Signal Processor,简称DSP)、专用集成电路(Application Specific Integrated Circuit,简称ASIC)等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合发明所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。It should be understood that the above-mentioned processor may be a central processing unit (Central Processing Unit, referred to as CPU), or other general-purpose processors, digital signal processors (Digital Signal Processor, referred to as DSP), application specific integrated circuit (Application Specific Integrated Circuit, Referred to as ASIC) and so on. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in conjunction with the invention can be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
存储器可能包含高速RAM存储器,也可能还包括非易失性存储NVM,例如至少一个磁盘存储器,还可以为U盘、移动硬盘、只读存储器、磁盘或光盘等。The memory may include high-speed RAM memory, and may also include non-volatile storage NVM, such as at least one magnetic disk memory, and may also be a U disk, a removable hard disk, a read-only memory, a magnetic disk or an optical disk, and the like.
一种示例性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于专用集成电路(Application Specific Integrated Circuits,简称ASIC)中。当然,处理器和存储介质也可以作为分立组件存在于电子设备或主控设备中。An exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium can also be an integral part of the processor. The processor and the storage medium may be located in Application Specific Integrated Circuits (ASIC for short). Of course, the processor and the storage medium may also exist in the electronic device or the host device as discrete components.
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。It should be noted that, herein, the terms "comprising", "comprising" or any other variation thereof are intended to encompass non-exclusive inclusion, such that a process, method, article or device comprising a series of elements includes not only those elements, It also includes other elements not expressly listed or inherent to such a process, method, article or apparatus. Without further limitation, an element qualified by the phrase "comprising a..." does not preclude the presence of additional identical elements in a process, method, article or apparatus that includes the element.
最后应说明的是:以上各实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述各实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, but not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that: The technical solutions described in the foregoing embodiments can still be modified, or some or all of the technical features thereof can be equivalently replaced; and these modifications or replacements do not make the essence of the corresponding technical solutions deviate from the technical solutions of the embodiments of the present application. scope.

Claims (73)

  1. A simulation test method, characterized in that the method is applied to a simulation test system; the simulation test system is configured to test a control system of a movable platform based on a virtual scene model, the scene model comprising a plurality of scene elements;
    the control system is configured to output, based on scene images observed by at least two vision sensors, a control signal for controlling the movable platform; the method comprises:
    acquiring the control signal of the movable platform output by the control system;
    simulating, based on the control signal, movement of the movable platform in the scene model to obtain a relative pose between the movable platform and the scene elements;
    generating a plurality of scene images according to the relative pose, the plurality of scene images comprising scene images observed by at least two of the vision sensors while the movable platform moves in the scene model; and
    outputting the plurality of scene images, so that the control system generates corresponding control signals according to the plurality of scene images.
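
  (Illustrative note, not part of the claims.) Claim 1 recites a closed loop: read the control signal, simulate the platform's motion, derive the relative pose to the scene elements, render the scene images, and feed them back to the control system. A minimal Python sketch of such a loop follows; the scene-model and control-system API used here (latest_control_signal, step, render, feed_images) is a hypothetical placeholder, not an interface defined by this application.

    import numpy as np

    def simulation_loop(control_system, scene_model, cameras, steps=1000):
        # Hypothetical closed-loop simulation test (sketch only).
        platform_pose = np.eye(4)  # 4x4 homogeneous pose of the movable platform in the scene frame
        for _ in range(steps):
            # 1. Acquire the control signal output by the control system under test.
            control_signal = control_system.latest_control_signal()
            # 2. Simulate motion of the movable platform in the virtual scene model.
            platform_pose = scene_model.step(platform_pose, control_signal)
            # 3. Relative pose between the platform and each scene element.
            relative_poses = {elem.name: np.linalg.inv(platform_pose) @ elem.pose
                              for elem in scene_model.elements}
            # 4. Generate one scene image per vision sensor from the relative poses.
            images = [scene_model.render(relative_poses, cam) for cam in cameras]
            # 5. Output the images so the control system can produce the next control signal.
            control_system.feed_images(images)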
  2. The method according to claim 1, characterized in that generating a plurality of scene images according to the relative pose comprises:
    determining, according to the relative pose between the movable platform and the scene elements, imaging changes of the scene elements observable by the movable platform, and generating the plurality of scene images based on the imaging changes.
  3. The method according to claim 1, characterized in that generating a plurality of scene images according to the relative pose comprises:
    generating the plurality of scene images according to the relative pose between the movable platform and the scene elements and stereo vision parameters.
  4. The method according to claim 3, characterized in that the stereo vision parameters are determined by installation poses of the at least two vision sensors and/or a relative pose between the at least two vision sensors.
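
  (Illustrative note, not part of the claims.) Claims 3 and 4 use stereo vision parameters, i.e. the installation poses of the vision sensors and/or their relative pose. A minimal sketch, assuming the parameters are given as 4x4 installation poses relative to the platform body and an assumed 0.12 m baseline, of how they combine with the platform pose to give one viewpoint per sensor:

    import numpy as np

    def camera_view_poses(platform_pose, sensor_extrinsics):
        # platform_pose     : 4x4 pose of the movable platform in the scene frame.
        # sensor_extrinsics : list of 4x4 installation poses of the vision sensors
        #                     relative to the platform body (assumed representation).
        # Returns one 4x4 camera-to-scene pose per sensor.
        return [platform_pose @ t_body_cam for t_body_cam in sensor_extrinsics]

    # Example: two forward-facing cameras separated by an assumed 0.12 m baseline.
    baseline = 0.12
    left = np.eye(4);  left[0, 3] = -baseline / 2
    right = np.eye(4); right[0, 3] = +baseline / 2
    views = camera_view_poses(np.eye(4), [left, right])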
  5. The method according to claim 1, characterized in that outputting the plurality of scene images comprises:
    sending the plurality of scene images to the control system through an image output interface.
  6. The method according to claim 5, characterized in that sending the plurality of scene images to the control system through an image output interface comprises:
    converting the plurality of scene images into a plurality of scene images in a preset format; and
    sending the plurality of scene images in the preset format to the control system through the image output interface.
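
  (Illustrative note, not part of the claims.) Claim 6 converts the rendered images to a preset format before they pass through the image output interface. A minimal sketch, assuming the preset format is 8-bit BGR at a fixed resolution (the claim itself does not fix any particular format):

    import cv2
    import numpy as np

    def to_preset_format(image_rgb_float, size=(1280, 720)):
        # Convert a rendered float RGB image to an assumed preset format: 8-bit BGR, fixed size.
        img = np.clip(image_rgb_float * 255.0, 0, 255).astype(np.uint8)
        img = cv2.resize(img, size)                  # unify resolution
        return cv2.cvtColor(img, cv2.COLOR_RGB2BGR)  # unify channel order

    # frames = [to_preset_format(img) for img in rendered_images]  # then write them to the output interface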
  7. The method according to claim 1, characterized in that outputting the plurality of scene images so that the control system generates corresponding control signals according to the plurality of scene images comprises:
    sending the plurality of scene images to a display device, so as to display the plurality of scene images through the display device, so that the control system outputs corresponding control signals based on images obtained by the at least two vision sensors photographing the display device;
    wherein the number of the vision sensors is the same as the number of displayed scene images; the at least two vision sensors are in one-to-one correspondence with the plurality of scene images, and each vision sensor is configured to photograph the corresponding scene image.
  8. The method according to claim 7, characterized in that the number of the display devices is the same as the number of the vision sensors, at least two of the display devices are in one-to-one correspondence with the at least two vision sensors, and each vision sensor is configured to photograph a picture displayed by the corresponding display device;
    correspondingly, sending the plurality of scene images to a display device so as to display the plurality of scene images through the display device comprises:
    sending each scene image to its corresponding display device for display, so as to display the plurality of scene images through at least two of the display devices.
  9. The method according to claim 8, characterized in that, before sending each scene image to its corresponding display device for display, the method further comprises:
    performing image conversion on at least some of the plurality of scene images according to calibration parameters corresponding to the vision sensors.
  10. The method according to claim 8, characterized in that generating a plurality of scene images according to the relative pose comprises:
    generating the plurality of scene images according to the relative pose between the movable platform and the scene elements, stereo vision parameters, and the calibration parameters corresponding to the vision sensors.
  11. The method according to claim 9 or 10, characterized by further comprising:
    calibrating the vision sensors by a camera calibration method to determine the calibration parameters of the vision sensors.
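
  (Illustrative note, not part of the claims.) Claims 9-11 rely on calibration parameters obtained by a camera calibration method. One common choice is checkerboard calibration as implemented in OpenCV; the board geometry, square size and file pattern below are assumptions for illustration only:

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner corners of the checkerboard (assumed board)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025  # 25 mm squares (assumed)

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob("calib/*.png"):  # hypothetical calibration images
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            image_size = gray.shape[::-1]

    # The intrinsic matrix and distortion coefficients are the calibration parameters.
    ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)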
  12. The method according to claim 7, characterized in that the display device is a 3D display device;
    sending the plurality of scene images to a display device so as to display the plurality of scene images through the display device comprises:
    sending the plurality of scene images to the 3D display device, so that the 3D display device displays the plurality of scene images by means of 3D projection.
  13. The method according to any one of claims 1-10, characterized in that the method further comprises:
    determining corresponding sensing information according to the relative pose between the movable platform and the scene elements; and
    outputting the sensing information, so that the control system determines a corresponding control signal according to the plurality of scene images and the sensing information.
  14. The method according to claim 13, characterized in that the sensing information comprises point cloud data corresponding to the scene elements.
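
  (Illustrative note, not part of the claims.) The point cloud data of claim 14 can, for example, be derived by transforming points sampled on a scene element into the platform frame using the relative pose; how the model points are sampled is an assumption of this sketch:

    import numpy as np

    def element_point_cloud(relative_pose, model_points):
        # relative_pose : 4x4 pose of the scene element relative to the movable platform.
        # model_points  : (N, 3) points sampled on the element's surface (assumed input).
        # Returns an (N, 3) point cloud expressed in the platform frame.
        homogeneous = np.hstack([model_points, np.ones((model_points.shape[0], 1))])
        return (relative_pose @ homogeneous.T).T[:, :3]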
  15. The method according to any one of claims 1-10, characterized in that the method further comprises:
    determining an operating state of the movable platform according to the control signal; and
    evaluating the control system according to the operating state of the movable platform.
  16. The method according to any one of claims 1-10, characterized in that the movable platform is a vehicle, and the control system is a driving control system applied to the vehicle.
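
  (Illustrative note, not part of the claims.) Claim 15 evaluates the control system from the platform's operating state. A toy sketch of such an evaluation; the specific metrics and the 0.5 m threshold are assumptions, not requirements of the claim:

    import numpy as np

    def evaluate_run(platform_positions, reference_path, collision_count):
        # platform_positions : (N, 2) simulated x/y positions of the movable platform.
        # reference_path     : (N, 2) desired positions for the same time steps.
        # collision_count    : number of collisions detected against scene elements.
        tracking_error = np.linalg.norm(platform_positions - reference_path, axis=1)
        return {
            "mean_tracking_error": float(tracking_error.mean()),
            "max_tracking_error": float(tracking_error.max()),
            "collisions": int(collision_count),
            "passed": bool(collision_count == 0 and tracking_error.max() < 0.5),
        }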
  17. A simulation test method, characterized by comprising:
    acquiring a control signal output by a control system to be tested, the control signal being a control signal for controlling a movable platform that is output by the control system according to historically input scene images and a preset control model; and
    outputting at least two scene images according to the control signal and a virtual scene model;
    wherein the scene model comprises a plurality of scene elements;
    the at least two scene images comprise a first scene image and a second scene image, the first scene image comprises a first pixel region, the second scene image comprises a second pixel region, and the first pixel region and the second pixel region describe a same scene element; a positional offset between a position of the first pixel region in the first scene image and a position of the second pixel region in the second scene image is determined according to a relative pose between vision sensors of the movable platform; and
    the plurality of output scene images are used to represent the scene images of the scene element observed by the vision sensors of the movable platform after the movable platform has moved in response to the control signal and the relative pose between the movable platform and the scene element has changed.
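
  (Illustrative note, not part of the claims.) For the simplest case contemplated by claim 17, two rectified, parallel vision sensors with focal length f (in pixels), baseline B, and a scene element at depth Z, the positional offset (disparity) between the first and second pixel regions reduces to the standard relation below. The numbers are an assumed example, and the general case also depends on the rotation between the sensors:

    d = \frac{f\,B}{Z}, \qquad \text{e.g. } f = 1000\ \text{px},\ B = 0.12\ \text{m},\ Z = 20\ \text{m} \;\Rightarrow\; d = \frac{1000 \times 0.12}{20} = 6\ \text{px}.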
  18. The method according to claim 17, characterized in that outputting at least two scene images according to the control signal and the virtual scene model comprises:
    determining at least two scene images according to the control signal, the virtual scene model, and stereo vision parameters; and
    outputting the at least two scene images.
  19. The method according to claim 18, characterized in that determining at least two scene images according to the control signal, the virtual scene model, and stereo vision parameters comprises:
    determining a relative pose between the movable platform and the scene elements according to the control signal and the virtual scene model; and
    determining at least two scene images according to the relative pose between the movable platform and the scene elements and the stereo vision parameters.
  20. The method according to claim 18, characterized in that the stereo vision parameters are determined by installation poses of the vision sensors of the movable platform and/or a relative pose between the vision sensors of the movable platform.
  21. The method according to claim 17, characterized in that outputting at least two scene images comprises:
    sending the at least two scene images to the control system through an image output interface.
  22. The method according to claim 21, characterized in that sending the at least two scene images to the control system through an image output interface comprises:
    converting the at least two scene images into at least two scene images in a preset format; and
    sending the at least two scene images in the preset format to the control system through the image output interface.
  23. The method according to claim 17, characterized in that outputting at least two scene images comprises:
    sending the at least two scene images to a display device, so as to display the at least two scene images through the display device, so that the control system determines a corresponding control signal according to images captured by at least two vision sensors and a preset control model;
    wherein the number of the vision sensors is the same as the number of output scene images; the at least two vision sensors are in one-to-one correspondence with the at least two scene images, and each vision sensor is configured to photograph the corresponding scene image.
  24. The method according to claim 23, characterized in that the number of the display devices is the same as the number of the vision sensors, at least two of the display devices are in one-to-one correspondence with the at least two vision sensors, and each vision sensor is configured to photograph a picture displayed by the corresponding display device;
    correspondingly, sending the at least two scene images to a display device so as to display the at least two scene images through the display device comprises:
    sending each scene image to its corresponding display device for display, so as to display the at least two scene images through at least two of the display devices.
  25. The method according to claim 24, characterized in that, before sending a scene image to its corresponding display device for display, the method further comprises:
    performing image conversion on at least some of the at least two scene images according to calibration parameters corresponding to the vision sensors.
  26. The method according to claim 24, characterized in that outputting at least two scene images according to the control signal and the virtual scene model comprises:
    determining at least two scene images according to the control signal, the virtual scene model, stereo vision parameters, and the calibration parameters corresponding to the vision sensors; and
    outputting the at least two scene images.
  27. The method according to claim 25 or 26, characterized by further comprising:
    calibrating the vision sensors by a camera calibration method to determine the calibration parameters of the vision sensors.
  28. The method according to claim 23, characterized in that the display device is a 3D display device;
    sending the at least two scene images to a display device so as to display the at least two scene images through the display device comprises:
    sending the at least two scene images to the 3D display device, so that the 3D display device displays the at least two scene images by means of 3D projection.
  29. The method according to any one of claims 17-26, characterized in that the method further comprises:
    determining corresponding sensing information according to the control signal and the virtual scene model; and
    outputting the sensing information, so that the control system determines a corresponding control signal according to the at least two scene images and the sensing information.
  30. The method according to claim 29, characterized in that the sensing information comprises point cloud data corresponding to the scene elements;
    determining corresponding sensing information according to the control signal and the virtual scene model comprises:
    determining a relative pose between the movable platform and the scene elements according to the control signal and the virtual scene model; and
    determining the point cloud data according to the relative pose between the movable platform and the scene elements.
  31. The method according to any one of claims 17-26, characterized in that the method further comprises:
    determining an operating state of the movable platform according to the control signal; and
    evaluating the control system according to the operating state of the movable platform.
  32. The method according to any one of claims 17-26, characterized in that the control system is a driving control system applied to a vehicle, and the movable platform is a vehicle.
  33. A simulation test system, characterized in that the simulation test system is configured to test a control system of a movable platform based on a virtual scene model, the scene model comprising a plurality of scene elements;
    the control system is configured to output, based on scene images observed by at least two vision sensors, a control signal for controlling the movable platform;
    the simulation test system comprises a simulator and an image output device;
    the simulator is configured to acquire the control signal of the movable platform output by the control system, simulate, based on the control signal, movement of the movable platform in the scene model to obtain a relative pose between the movable platform and the scene elements, and generate a scene image observable based on the relative pose; and
    the image output device is configured to acquire the scene image generated by the simulator and output a plurality of scene images according to the acquired scene image, the plurality of scene images comprising scene images observed by at least two of the vision sensors while the movable platform moves in the scene model, so that the control system generates corresponding control signals according to the plurality of scene images.
  34. The system according to claim 33, characterized in that, when generating the scene image observable based on the relative pose, the simulator is specifically configured to:
    generate the plurality of scene images according to the relative pose between the movable platform and the scene elements.
  35. The system according to claim 34, characterized in that, when generating the plurality of scene images according to the relative pose, the simulator is specifically configured to:
    determine, according to the relative pose between the movable platform and the scene elements, imaging changes of the scene elements observable by the movable platform, and generate the plurality of scene images based on the imaging changes.
  36. The system according to claim 34, characterized in that, when generating the plurality of scene images according to the relative pose, the simulator is specifically configured to:
    generate the plurality of scene images according to the relative pose between the movable platform and the scene elements and stereo vision parameters.
  37. The system according to claim 36, characterized in that the stereo vision parameters are determined by installation poses of the at least two vision sensors and/or a relative pose between the at least two vision sensors.
  38. The system according to claim 34, characterized in that the image output device comprises an image output interface, and the simulator is further configured to:
    send the plurality of scene images to the image output interface, so that the image output interface sends the plurality of scene images to the control system.
  39. The system according to claim 38, characterized in that, when sending the plurality of scene images to the image output interface, the simulator is specifically configured to:
    convert the plurality of scene images into a plurality of scene images in a preset format; and
    send the plurality of scene images in the preset format to the image output interface.
  40. The system according to claim 34, characterized in that the image output device comprises a display device for displaying the plurality of scene images;
    the simulator is further configured to: send the plurality of scene images to the display device, so as to display the plurality of scene images through the display device, so that the control system outputs corresponding control signals based on images obtained by at least two vision sensors photographing the display device;
    wherein the number of the vision sensors is the same as the number of displayed scene images; the at least two vision sensors are in one-to-one correspondence with the plurality of scene images, and each vision sensor is configured to photograph the corresponding scene image.
  41. The system according to claim 40, characterized in that the number of the display devices is the same as the number of the vision sensors, at least two of the display devices are in one-to-one correspondence with the at least two vision sensors, and each vision sensor is configured to photograph a picture displayed by the corresponding display device;
    correspondingly, when sending the plurality of scene images to the display device so as to display the plurality of scene images through the display device, the simulator is specifically configured to:
    send each scene image to its corresponding display device for display, so as to display the plurality of scene images through at least two of the display devices.
  42. The system according to claim 41, characterized in that the simulator is further configured to:
    before sending a scene image to its corresponding display device for display, perform image conversion on at least some of the plurality of scene images according to calibration parameters corresponding to the vision sensors.
  43. The system according to claim 41, characterized in that, when generating the plurality of scene images according to the relative pose, the simulator is specifically configured to:
    generate the plurality of scene images according to the relative pose between the movable platform and the scene elements, stereo vision parameters, and the calibration parameters corresponding to the vision sensors.
  44. The system according to claim 42 or 43, characterized in that the simulator is further configured to:
    calibrate the vision sensors by a camera calibration method to determine the calibration parameters of the vision sensors.
  45. The system according to claim 40, characterized in that the display device is a 3D display device, and the 3D display device displays the plurality of scene images by means of 3D projection.
  46. The system according to claim 40, characterized by further comprising an optical system;
    the optical system is disposed between the display device and the vision sensor, and the optical system is configured to perform optical conversion on the scene image output by the display device, so that the converted scene image matches a field of view of the vision sensor.
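
  (Illustrative note, not part of the claims.) The field-of-view matching required of the optical system in claim 46 can be reasoned about with the pinhole relation below: a sensor of width w behind a lens of focal length f has a horizontal field of view theta, and the optics should make the displayed image subtend that same angle at the sensor. The numbers are an assumed example; real optics also have to match focus and distortion:

    \theta = 2\arctan\!\left(\frac{w}{2f}\right), \qquad \text{e.g. } w = 5.76\ \text{mm},\ f = 6\ \text{mm} \;\Rightarrow\; \theta = 2\arctan(0.48) \approx 51.3^{\circ}.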
  47. The system according to claim 33, characterized in that the scene image generated by the simulator is a single scene image, and the image output device is specifically configured to output the plurality of scene images according to the single scene image.
  48. The system according to any one of claims 33-43, characterized in that the simulator is further configured to:
    determine corresponding sensing information according to the relative pose between the movable platform and the scene elements; and
    output the sensing information, so that the control system determines a corresponding control signal according to the plurality of scene images and the sensing information.
  49. The system according to claim 48, characterized in that the sensing information comprises point cloud data corresponding to the scene elements.
  50. The system according to any one of claims 33-43, characterized in that the simulator is further configured to:
    determine an operating state of the movable platform according to the control signal; and
    evaluate the control system according to the operating state of the movable platform.
  51. The system according to any one of claims 33-43, characterized in that the movable platform is a vehicle, and the control system is a driving control system applied to the vehicle.
  52. A simulation test system, characterized by comprising a simulator and an image output device;
    the simulator is configured to acquire a control signal output by a control system to be tested, and determine a corresponding scene image according to the control signal and a virtual scene model, wherein the virtual scene model comprises a plurality of scene elements;
    the image output device is configured to acquire the scene image and output at least two scene images according to the scene image;
    wherein the at least two scene images comprise a first scene image and a second scene image, the first scene image comprises a first pixel region, the second scene image comprises a second pixel region, and the first pixel region and the second pixel region describe a same scene element; a positional offset between a position of the first pixel region in the first scene image and a position of the second pixel region in the second scene image is determined according to a relative pose between vision sensors of a movable platform;
    the control signal is a control signal for controlling the movable platform that is output by the control system according to historically input at least two scene images and a preset control model; and
    the plurality of output scene images are used to represent the scene images of the scene element observed by the vision sensors of the movable platform after the movable platform has moved in response to the control signal and the relative pose between the movable platform and the scene element has changed.
  53. The system according to claim 52, characterized in that the scene images determined by the simulator comprise the at least two scene images.
  54. The system according to claim 53, characterized in that the simulator is specifically configured to:
    acquire the control signal output by the control system to be tested, and determine at least two scene images according to the control signal, the virtual scene model, and stereo vision parameters.
  55. The system according to claim 54, characterized in that, when determining at least two scene images according to the control signal, the virtual scene model, and the stereo vision parameters, the simulator is specifically configured to:
    determine a relative pose between the movable platform and the scene elements according to the control signal and the virtual scene model; and
    determine at least two scene images according to the relative pose between the movable platform and the scene elements and the stereo vision parameters.
  56. The system according to claim 54, characterized in that the stereo vision parameters are determined by installation poses of the vision sensors of the movable platform and/or a relative pose between the vision sensors of the movable platform.
  57. The system according to claim 52, characterized in that the image output device comprises an image output interface, and the simulator is further configured to:
    send the determined at least two scene images to the image output interface, so that the image output interface sends the at least two scene images to the control system.
  58. The system according to claim 57, characterized in that, when sending the determined at least two scene images to the image output interface, the simulator is specifically configured to:
    convert the at least two scene images into at least two scene images in a preset format; and
    send the at least two scene images in the preset format to the image output interface.
  59. The system according to claim 53, characterized in that the image output device comprises a display device for displaying the at least two scene images;
    the simulator is further configured to: send the at least two scene images to the display device, so as to display the at least two scene images through the display device, so that the control system determines a corresponding control signal according to images captured by at least two vision sensors and a preset control model;
    wherein the number of the vision sensors is the same as the number of output scene images; the at least two vision sensors are in one-to-one correspondence with the at least two scene images, and each vision sensor is configured to photograph the corresponding scene image.
  60. The system according to claim 59, characterized in that the number of the display devices is the same as the number of the vision sensors, at least two of the display devices are in one-to-one correspondence with the at least two vision sensors, and each vision sensor is configured to photograph a picture displayed by the corresponding display device;
    correspondingly, when sending the at least two scene images to the display device so as to display the at least two scene images through the display device, the simulator is specifically configured to:
    send each scene image to its corresponding display device for display, so as to display the at least two scene images through at least two of the display devices.
  61. The system according to claim 60, characterized in that the simulator is further configured to:
    before sending a scene image to its corresponding display device for display, perform image conversion on at least some of the at least two scene images according to calibration parameters corresponding to the vision sensors.
  62. The system according to claim 60, characterized in that, when determining the corresponding scene image according to the control signal and the virtual scene model, the simulator is specifically configured to:
    determine at least two scene images according to the control signal, the virtual scene model, stereo vision parameters, and the calibration parameters corresponding to the vision sensors.
  63. The system according to claim 61 or 62, characterized in that the simulator is further configured to:
    calibrate the vision sensors by a camera calibration method to determine the calibration parameters of the vision sensors.
  64. The system according to claim 59, characterized in that the display device is a 3D display device, and the 3D display device displays the at least two scene images by means of 3D projection.
  65. The system according to claim 59, characterized by further comprising an optical system;
    the optical system is disposed between the display device and the vision sensor, and the optical system is configured to perform optical conversion on the scene image output by the display device, so that the converted scene image matches a field of view of the vision sensor.
  66. The system according to claim 52, characterized in that the scene image determined by the simulator is a single scene image, and the image output device is specifically configured to output the at least two scene images according to the single scene image.
  67. The system according to any one of claims 52-62, characterized in that the simulator is further configured to:
    determine corresponding sensing information according to the control signal and the virtual scene model; and
    output the sensing information, so that the control system determines a corresponding control signal according to the at least two scene images and the sensing information.
  68. The system according to claim 67, characterized in that the sensing information comprises point cloud data corresponding to the scene elements; when determining the corresponding sensing information according to the control signal and the virtual scene model, the simulator is specifically configured to:
    determine a relative pose between the movable platform and the scene elements according to the control signal and the virtual scene model; and
    determine the point cloud data according to the relative pose between the movable platform and the scene elements.
  69. The system according to any one of claims 52-62, characterized in that the simulator is further configured to:
    determine an operating state of the movable platform according to the control signal; and
    evaluate the control system according to the operating state of the movable platform.
  70. The system according to any one of claims 52-62, characterized in that the control system is a driving control system applied to a vehicle, and the movable platform is a vehicle.
  71. A simulator, characterized by comprising a memory and at least one processor;
    the memory stores computer-executable instructions; and
    the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the method according to any one of claims 1-32.
  72. A computer-readable storage medium, characterized in that a computer program is stored on the readable storage medium; when the computer program is executed, the method according to any one of claims 1-32 is implemented.
  73. A computer program product, comprising a computer program, characterized in that, when the computer program is executed by a processor, the method according to any one of claims 1-32 is implemented.
PCT/CN2020/141798 2020-12-30 2020-12-30 Simulation test method and system, simulator, storage medium, and program product WO2022141294A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080081655.XA CN114846515A (en) 2020-12-30 2020-12-30 Simulation test method, simulation test system, simulator, storage medium, and program product
PCT/CN2020/141798 WO2022141294A1 (en) 2020-12-30 2020-12-30 Simulation test method and system, simulator, storage medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/141798 WO2022141294A1 (en) 2020-12-30 2020-12-30 Simulation test method and system, simulator, storage medium, and program product

Publications (1)

Publication Number Publication Date
WO2022141294A1 true WO2022141294A1 (en) 2022-07-07

Family

ID=82260077

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/141798 WO2022141294A1 (en) 2020-12-30 2020-12-30 Simulation test method and system, simulator, storage medium, and program product

Country Status (2)

Country Link
CN (1) CN114846515A (en)
WO (1) WO2022141294A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115562076A (en) * 2022-11-29 2023-01-03 北京路凯智行科技有限公司 Simulation system, method and storage medium for unmanned mine car
CN116046417A (en) * 2023-04-03 2023-05-02 西安深信科创信息技术有限公司 Automatic driving perception limitation testing method and device, electronic equipment and storage medium
CN116467859A (en) * 2023-03-30 2023-07-21 昆易电子科技(上海)有限公司 Data processing method, system, device and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937421B (en) * 2022-12-13 2024-04-02 昆易电子科技(上海)有限公司 Method for generating simulated video data, image generating device and readable storage medium
CN116990746B (en) * 2023-09-20 2024-01-30 武汉能钠智能装备技术股份有限公司 Direction finding system and method for radio monitoring

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102390370A (en) * 2011-10-25 2012-03-28 河海大学 Stereoscopic vision based emergency treatment device and method for running vehicles
US20180286268A1 (en) * 2017-03-28 2018-10-04 Wichita State University Virtual reality driver training and assessment system
CN110062916A (en) * 2017-04-11 2019-07-26 深圳市大疆创新科技有限公司 For simulating the visual simulation system of the operation of moveable platform
CN110779730A (en) * 2019-08-29 2020-02-11 浙江零跑科技有限公司 L3-level automatic driving system testing method based on virtual driving scene vehicle on-ring
CN111316324A (en) * 2019-03-30 2020-06-19 深圳市大疆创新科技有限公司 Automatic driving simulation system, method, equipment and storage medium
CN111399480A (en) * 2020-03-30 2020-07-10 上海汽车集团股份有限公司 Hardware-in-loop test system of intelligent driving controller
CN111752261A (en) * 2020-07-14 2020-10-09 同济大学 Automatic driving test platform based on autonomous driving robot

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115562076A (en) * 2022-11-29 2023-01-03 北京路凯智行科技有限公司 Simulation system, method and storage medium for unmanned mine car
CN115562076B (en) * 2022-11-29 2023-03-14 北京路凯智行科技有限公司 Simulation system, method and storage medium for unmanned mine car
CN116467859A (en) * 2023-03-30 2023-07-21 昆易电子科技(上海)有限公司 Data processing method, system, device and storage medium
CN116467859B (en) * 2023-03-30 2024-05-10 昆易电子科技(上海)有限公司 Data processing method, system, device and computer readable storage medium
CN116046417A (en) * 2023-04-03 2023-05-02 西安深信科创信息技术有限公司 Automatic driving perception limitation testing method and device, electronic equipment and storage medium
CN116046417B (en) * 2023-04-03 2023-11-24 安徽深信科创信息技术有限公司 Automatic driving perception limitation testing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114846515A (en) 2022-08-02

Similar Documents

Publication Publication Date Title
WO2022141294A1 (en) Simulation test method and system, simulator, storage medium, and program product
TWI703064B (en) Systems and methods for positioning vehicles under poor lighting conditions
CN109144095B (en) Embedded stereoscopic vision-based obstacle avoidance system for unmanned aerial vehicle
JP6548691B2 (en) Image generation system, program and method, simulation system, program and method
US10630962B2 (en) Systems and methods for object location
WO2020168668A1 (en) Slam mapping method and system for vehicle
CN110244282B (en) Multi-camera system and laser radar combined system and combined calibration method thereof
CN111028155B (en) Parallax image splicing method based on multiple pairs of binocular cameras
Li et al. Easy calibration of a blind-spot-free fisheye camera system using a scene of a parking space
US20120155744A1 (en) Image generation method
US20200012756A1 (en) Vision simulation system for simulating operations of a movable platform
WO2018066352A1 (en) Image generation system, program and method, and simulation system, program and method
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
JP7479729B2 (en) Three-dimensional representation method and device
WO2020199057A1 (en) Self-piloting simulation system, method and device, and storage medium
CN108564654B (en) Picture entering mode of three-dimensional large scene
CN111914790B (en) Real-time human body rotation angle identification method based on double cameras under different scenes
WO2023283929A1 (en) Method and apparatus for calibrating external parameters of binocular camera
CN115218919A (en) Optimization method and system of air track line and display
WO2021093804A1 (en) Omnidirectional stereo vision camera configuration system and camera configuration method
US20230005213A1 (en) Imaging apparatus, imaging method, and program
KR102298047B1 (en) Method of recording digital contents and generating 3D images and apparatus using the same
CN114663599A (en) Human body surface reconstruction method and system based on multiple views
Chen et al. A structured-light-based panoramic depth camera
TWI725620B (en) Omnidirectional stereo vision camera configuration system and camera configuration method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20967633

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20967633

Country of ref document: EP

Kind code of ref document: A1