WO2023028858A1 - A testing method and system - Google Patents

A testing method and system

Info

Publication number
WO2023028858A1
Authority
WO
WIPO (PCT)
Prior art keywords
information, vehicle, simulation, scene, test
Prior art date
Application number
PCT/CN2021/115752
Other languages
English (en)
French (fr)
Inventor
陈灿平
常陈陈
卢远志
陈保成
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2021/115752 (WO2023028858A1)
Priority to CN202180003497.0A (CN113892088A)
Priority to EP21955420.1A (EP4379558A1)
Publication of WO2023028858A1
Priority to US18/590,616 (US20240202401A1)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/3668 - Software testing
    • G06F 11/3672 - Test management
    • G06F 11/3684 - Test management for test design, e.g. generating new test cases
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 - Computer-aided design [CAD]
    • G06F 30/20 - Design optimisation, verification or simulation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/3664 - Environments for testing or debugging software
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/3668 - Software testing
    • G06F 11/3672 - Test management
    • G06F 11/3688 - Test management for test execution, e.g. scheduling of test suites

Definitions

  • the present application relates to the field of automobiles, in particular to a testing method and system.
  • A highly intelligent automatic driving system is the development direction of future automatic driving technology, and testing the intelligent driving system is a key part of its development process.
  • Software in the loop (software in loop, SIL), model in the loop (model in loop, MIL), hardware in the loop (hardware in loop, HIL) and other technical means can be used to accelerate algorithm iteration and test verification of the automatic driving system.
  • SIL simulation test is convenient and flexible, has strong repeatability, low test cost and good safety.
  • HIL rigs are expensive and have difficulty accounting for real vehicle dynamics.
  • the vehicle in the loop (vehicle in loop, VIL) test adopts the real vehicle in the loop, which makes up for the inaccuracy of the SIL and HIL simulation vehicle dynamics models.
  • road testing is the main testing method for advanced intelligent driving systems. Highly dangerous extreme scenarios (corner/edge cases) are encountered during road testing, and such scenarios are difficult to reproduce on the road. Reconstructing such key scenarios and conducting vehicle-in-the-loop tests quickly, efficiently and at low cost is of great significance for the algorithm development, iteration and verification of the intelligent driving system.
  • there is currently no test method for such reconstructed scenarios in the vehicle-in-the-loop test, and no method for ensuring the consistency of VIL test results.
  • the present application proposes a flexible and convenient vehicle-in-the-loop test system with low test cost and high reliability.
  • the embodiment of the present application provides a vehicle testing method.
  • the test method provided in the first possible implementation manner of the first aspect of the present application includes: obtaining first location information, where the first location information is the location information of the vehicle; and obtaining second location information according to the first location information, where the second location information is the location information of the vehicle in the simulation scene.
  • obtaining the second location information according to the first location information specifically includes: obtaining a location conversion relationship according to the first location information and the simulation scene information; and acquiring the second location information according to the first location information and the location conversion relationship.
  • by decoupling the test from the position constraints of the simulation scene, the test becomes flexible and convenient, test efficiency is improved, the VIL test content is enriched, and the test cost is reduced.
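The location conversion relationship above can be pictured as a planar rigid transform between the vehicle's real-world frame and the scenario frame. The sketch below is illustrative only, with assumed names and a 2D pose model; the patent does not specify an implementation:

```python
import math

def make_conversion(world_pose, scene_pose):
    """Build a rigid transform mapping world coordinates to scene coordinates.

    world_pose / scene_pose: (x, y, heading_rad) of the same reference point,
    e.g. the ego vehicle's start pose in the world and in the scenario.
    """
    wx, wy, wh = world_pose
    sx, sy, sh = scene_pose
    dtheta = sh - wh  # rotation aligning the world heading with the scene heading

    def convert(x, y, heading):
        # translate to the world anchor, rotate, then translate to the scene anchor
        dx, dy = x - wx, y - wy
        rx = dx * math.cos(dtheta) - dy * math.sin(dtheta)
        ry = dx * math.sin(dtheta) + dy * math.cos(dtheta)
        return sx + rx, sy + ry, heading + dtheta

    return convert

# First location: ego vehicle on the proving ground (world frame).
convert = make_conversion(world_pose=(500.0, 200.0, 0.0),
                          scene_pose=(0.0, 0.0, math.pi / 2))
# Second location: the same vehicle projected into the simulation scene;
# 10 m ahead in the world frame maps to 10 m along the scene's +y axis.
second = convert(510.0, 200.0, 0.0)
```

Anchoring the transform at the ego vehicle's start pose is what decouples the test site from the scenario's coordinates: any open test area can then host any scenario.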
  • the method further includes: updating the simulation scene information according to the second location information, where the simulation scene information includes traffic participant information; and sending the simulation scene information.
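A minimal scene-update step consistent with this description might look as follows (hypothetical data layout; the patent does not prescribe one):

```python
import json

def update_scene(scene, second_location, dt=0.05):
    """Advance the simulation scene one step around the projected ego pose."""
    scene["ego"]["position"] = second_location
    for p in scene["participants"]:          # traffic participant information
        px, py = p["position"]
        vx, vy = p["velocity"]
        p["position"] = (px + vx * dt, py + vy * dt)
    return scene

def send_scene(scene):
    """Serialize the scene for the vehicle's perception/planning stack."""
    return json.dumps(scene)

scene = {"ego": {"position": (0.0, 0.0)},
         "participants": [{"position": (20.0, 0.0), "velocity": (-5.0, 0.0)}]}
update_scene(scene, (0.0, 1.0))
msg = send_scene(scene)
```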
  • the simulation scenario information further includes scenario trigger information.
  • the scene trigger information is used to switch the test phase, and the test phase includes a first phase and a second phase; the vehicle uses the first planning control algorithm in the first phase and the second planning control algorithm in the second phase.
  • the method also includes: entering the second phase from the first phase according to the scene trigger information.
  • the method provided by this application reduces the influence of human factors in the VIL test process by using the first planning control algorithm during simulation initialization and switching to the second planning control algorithm based on the scene trigger information.
  • the scene trigger information includes at least one of the following: vehicle trigger information, traffic participant trigger information, and traffic signal trigger information.
  • the vehicle trigger information includes a set vehicle motion state.
  • the vehicle motion state includes at least one of the following: vehicle position information, vehicle motion information, driving task information, and the status of ego vehicle subsystems or components.
  • the traffic participant trigger information includes at least one of the following: a set motion state, a set position, and a set time of appearance in the scene.
  • the traffic signal trigger information includes at least one of the following: a set traffic light signal, a set traffic sign signal, a set traffic marking signal, and a set traffic manager signal.
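Combining the three trigger categories, a trigger check could be sketched like this (all field names are assumptions for illustration):

```python
def triggers_met(trigger_spec, state, tol=0.5):
    """Check whether the scene trigger conditions are satisfied.

    trigger_spec may combine vehicle, traffic-participant, and traffic-signal
    conditions; all specified conditions must hold for the trigger to fire.
    """
    checks = []
    if "ego_speed" in trigger_spec:            # vehicle trigger: motion state
        checks.append(abs(state["ego_speed"] - trigger_spec["ego_speed"]) <= tol)
    if "ego_position" in trigger_spec:         # vehicle trigger: position
        ex, ey = state["ego_position"]
        tx, ty = trigger_spec["ego_position"]
        checks.append((ex - tx) ** 2 + (ey - ty) ** 2 <= tol ** 2)
    if "participant_time" in trigger_spec:     # participant trigger: time of appearance
        checks.append(state["sim_time"] >= trigger_spec["participant_time"])
    if "traffic_light" in trigger_spec:        # traffic signal trigger
        checks.append(state["traffic_light"] == trigger_spec["traffic_light"])
    return all(checks)

spec = {"ego_speed": 10.0, "traffic_light": "green"}
state = {"ego_speed": 10.2, "traffic_light": "green"}
fired = triggers_met(spec, state)  # speed within tolerance and light matches
```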
  • the simulation scenario is obtained from a simulation scenario library, and the simulation scenario library includes multiple simulation scenarios, including simulation scenarios obtained from road test data.
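A scenario library of this kind could be sketched as a simple keyed store whose entries may be distilled from road-test logs (hypothetical structure, not the patent's):

```python
class ScenarioLibrary:
    """Minimal scenario library; scenarios may be extracted from road-test logs."""

    def __init__(self):
        self._scenarios = {}

    def add(self, name, scenario):
        self._scenarios[name] = scenario

    def from_road_test(self, name, log):
        # Keep only the fields needed to replay the scene: participant tracks
        # and the trigger that reproduces the critical moment.
        scenario = {"participants": log["tracks"], "trigger": log["trigger"]}
        self.add(name, scenario)
        return scenario

    def get(self, name):
        return self._scenarios[name]

lib = ScenarioLibrary()
lib.from_road_test("cut_in_01",
                   {"tracks": [{"id": 7, "path": [(0, 0), (5, 1)]}],
                    "trigger": {"ego_speed": 15.0}})
scn = lib.get("cut_in_01")
```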
  • the second aspect of the present application provides a vehicle testing system.
  • the system includes: an acquisition module, configured to acquire the first location information, where the first location information is the location information of the vehicle; and a position conversion module, configured to obtain second position information according to the first position information, where the second position information is the position information of the vehicle in the simulation scene.
  • that the location conversion module is configured to acquire the second location information according to the first location information specifically includes: the location conversion module obtains a position conversion relationship according to the first location information and the simulation scene information, and obtains the second position information according to the first position information and the position conversion relationship.
  • in the third possible implementation manner, the system further includes a simulation module and a sending module; after the location conversion module acquires the second location information according to the first location information, the simulation module updates the simulation scene information according to the second position information, where the simulation scene information includes traffic participant information, and the sending module sends the simulation scene information.
  • the simulation scenario information further includes scenario trigger information.
  • the scene trigger information is used to switch the test phase, and the test phase includes a first phase and a second phase; the vehicle uses the first planning control algorithm in the first phase and the second planning control algorithm in the second phase.
  • the system also includes a simulation trigger module, which is used to control the vehicle to enter the second phase from the first phase according to the scene trigger information.
  • the planning control algorithm for the VIL test initialization process and the algorithm switching module designed in this application can reduce the interference of human factors during the VIL test, ensure the consistency of the ego vehicle state when the scene is triggered, and greatly improve the repeatability and stability of repeated tests.
  • the scene trigger information includes at least one of the following: vehicle trigger information, traffic participant trigger information, and traffic signal trigger information.
  • the vehicle trigger information includes a set vehicle motion state.
  • the vehicle motion state includes at least one of the following: vehicle position information, vehicle motion information, driving task information, and the status of ego vehicle subsystems or components.
  • the traffic participant trigger information includes at least one of the following: a set motion state, a set position, and a set time of appearance in the scene.
  • the traffic signal trigger information includes at least one of the following: a set traffic light signal, a set traffic sign signal, a set traffic marking signal, and a set traffic manager signal.
  • This application uses the initialization process of the simulation scene and the scene simulation process, together with the scene trigger information and traffic participant trigger information, to reduce the impact of the vehicle model on scene interaction results, uses a variety of trigger information to ensure the consistency of the scene simulation, and improves the diversity of simulation scenarios.
  • the simulation scenario is obtained from a simulation scenario library, which includes multiple simulation scenarios, including simulation scenarios obtained from road test data.
  • the third aspect of the present application also provides a computer-readable storage medium that stores code or instructions; when the code or instructions are executed, the method of any one of items 1 to 10 of the first aspect is performed.
  • the fourth aspect of the present application also provides a vehicle.
  • the vehicle can obtain scene trigger information, and switch from the first planning control algorithm to the second planning control algorithm according to the scene trigger information; the first planning control algorithm is used to bring the vehicle state to a preset condition, and the second planning control algorithm is used for the simulation test.
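The two-stage switch can be illustrated with a toy loop: an initialization controller drives the ego to the preset state, and the scene trigger hands control to the algorithm under test. All names and the simplified one-variable vehicle model below are assumptions:

```python
def run_test(vehicle, init_controller, test_controller, trigger,
             dt=0.05, timeout=60.0):
    """Two-stage VIL loop: the first planning-control algorithm drives the
    vehicle to the preset state; the scene trigger then switches to the
    second algorithm, which is the one actually under test."""
    controller, stage = init_controller, 1
    t = 0.0
    while t < timeout:
        if stage == 1 and trigger(vehicle):
            controller, stage = test_controller, 2   # trigger fires: switch
        controller(vehicle, dt)
        t += dt
    return stage

def accelerate(v, dt):
    # first planning-control algorithm: bring the ego to the preset 15 m/s
    v["speed"] = min(v["speed"] + 2.0 * dt, 15.0)

def hold(v, dt):
    # second planning-control algorithm: placeholder for the algorithm under test
    pass

vehicle = {"speed": 0.0}
stage = run_test(vehicle, accelerate, hold,
                 trigger=lambda v: v["speed"] >= 15.0)
```

Because the trigger only fires once the preset state is reached, repeated runs start the test phase from the same ego state, which is the consistency property the application emphasizes.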
  • the test method provided by this application can be applied to complex scenarios with many traffic participants, and can accommodate the complex situations of multiple interactions and multiple types of participants encountered in advanced automatic driving.
  • the iteration, update, and verification of the intelligent driving system must ensure the repeatability of the same scene in order to evaluate and demonstrate the system's capabilities.
  • the test method provided in this application can ensure the consistency of multiple test results and reduce the test cost.
  • projecting the test vehicle into the simulation scene through the position conversion module also enables a vehicle-in-the-loop test with flexible and convenient site adjustment and good repeatability of results.
  • FIG. 1 is a functional schematic diagram of a vehicle provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a vehicle architecture provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a simulation system architecture provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a self-vehicle information extraction process provided by the embodiment of the present application.
  • FIG. 5 is a schematic diagram of a position conversion principle provided by the embodiment of the present application.
  • FIG. 6 is a schematic diagram of an update of a positioning conversion relationship provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a scenario test flow provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a test process provided by the embodiment of the present application.
  • FIG. 9 is a schematic diagram of a test initialization process provided by an embodiment of the present application.
  • FIG. 10 is a curve diagram of multiple test speeds for a scene extracted from road test data, provided by an embodiment of the present application.
  • FIG. 11 is a curve diagram of multiple test speeds provided by the embodiment of the present application.
  • FIG. 1 is a functional schematic diagram of a vehicle 100 provided by the embodiment of the present application.
  • Vehicle 100 may include various subsystems such as infotainment system 110 , perception system 120 , decision control system 130 , drive system 140 , and computing platform 150 .
  • vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components.
  • each subsystem and component of the vehicle 100 may be interconnected in a wired or wireless manner.
  • the infotainment system 110 may include a communication system 111 , an entertainment system 112 and a navigation system 113 .
  • Communication system 111 may include a wireless communication system that may wirelessly communicate with one or more devices, either directly or via a communication network.
  • the wireless communication system can use the third generation (3G) cellular communication technology, such as code division multiple access (CDMA), or the fourth generation (4G) cellular communication technology, such as long term evolution (LTE) communication technology.
  • the fifth generation (5th generation, 5G) cellular communication technology such as new radio (new radio, NR) communication technology.
  • the wireless communication system may communicate with a wireless local area network (wireless local area network, WLAN) by using WiFi.
  • the wireless communication system may communicate directly with the device using an infrared link, Bluetooth, or ZigBee.
  • Other wireless protocols may also be used, such as various vehicle communication systems; for example, the wireless communication system may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
  • the entertainment system 112 can include a central control screen, a microphone and a sound system. Users can listen to the radio and play music in the car through the entertainment system. The central control screen may be a touch screen, which users operate by touch. In some cases, the user's voice signal can be acquired through the microphone, and the vehicle 100 can be controlled based on analysis of the voice signal, for example to adjust the temperature inside the vehicle. In other cases, music may be played to the user via the sound system.
  • the navigation system 113 may include a map service provided by a map provider, so as to provide navigation for the driving route of the vehicle 100 , and the navigation system 113 may cooperate with the global positioning system 121 and the inertial measurement unit 122 of the vehicle.
  • the map service provided by the map provider can be a two-dimensional map or a high-definition map.
  • the perception system 120 may include several kinds of sensors that sense information about the environment around the vehicle 100 .
  • the perception system 120 may include a global positioning system 121 (which may be the global positioning system (GPS), the Beidou system or another positioning system), an inertial measurement unit (inertial measurement unit, IMU) 122, laser radar 123, millimeter wave radar 124, ultrasonic radar 125 and camera device 126.
  • the perception system 120 may also include sensors that monitor the internal systems of the vehicle 100 (e.g., an in-vehicle air quality monitor, fuel gauge, oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding properties (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function for the safe operation of the vehicle 100.
  • the positioning system 121 may be used to estimate the geographic location of the vehicle 100 .
  • the inertial measurement unit 122 is used to sense the position and orientation changes of the vehicle 100 based on inertial acceleration.
  • inertial measurement unit 122 may be a combination accelerometer and gyroscope.
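As a simplified illustration of how accelerometer and yaw-rate readings translate into position and orientation changes, a planar dead-reckoning step might look like this (assumed 2D model, not from the patent):

```python
import math

def dead_reckon(pose, accel, gyro, dt):
    """Integrate one IMU step (planar sketch).

    pose = (x, y, heading, speed); accel is longitudinal acceleration (m/s^2),
    gyro is yaw rate (rad/s).
    """
    x, y, h, v = pose
    h += gyro * dt                        # gyroscope updates orientation
    v += accel * dt                       # accelerometer updates speed
    return (x + v * math.cos(h) * dt,     # advance position along new heading
            y + v * math.sin(h) * dt, h, v)

pose = (0.0, 0.0, 0.0, 10.0)              # moving 10 m/s along +x
pose = dead_reckon(pose, accel=0.0, gyro=0.0, dt=0.1)
```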
  • the lidar 123 may utilize laser light to sense objects in the environment in which the vehicle 100 is located.
  • lidar 123 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
  • the millimeter wave radar 124 may utilize radio signals to sense objects within the surrounding environment of the vehicle 100 .
  • the millimeter wave radar 124 may also be used to sense the velocity and/or heading of objects.
  • the ultrasonic radar 125 may sense objects around the vehicle 100 using ultrasonic signals.
  • the camera device 126 can be used to capture image information of the surrounding environment of the vehicle 100 .
  • the camera device 126 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, etc., and the image information acquired by the camera device 126 may include still images or video stream information.
  • the decision-making control system 130 includes a computing system 131 for analyzing and making decisions based on the information acquired by the perception system 120.
  • the decision-making control system 130 also includes a vehicle controller 132 for controlling the power system of the vehicle 100, a steering system 133 for controlling the steering of the vehicle 100, an accelerator pedal 134 (covering both the accelerator pedal of an electric vehicle and the gas pedal of a fuel vehicle; the name is exemplary here) and a braking system 135.
  • Computing system 131 is operable to process and analyze various information acquired by perception system 120 in order to identify objects, objects, and/or features in the environment surrounding vehicle 100 .
  • the objects may include pedestrians or animals, and the objects and/or features may include traffic signals, road boundaries, and obstacles.
  • the computing system 131 may use technologies such as object recognition algorithms, structure from motion (SFM) algorithms, and video tracking. In some embodiments, computing system 131 may be used to map the environment, track objects, estimate the velocity of objects, and the like.
  • the computing system 131 can analyze various information obtained and obtain a control strategy for the vehicle.
  • the vehicle controller 132 can be used to coordinate and control the power battery and the driver 141 of the vehicle, so as to improve the power performance of the vehicle 100 .
  • the steering system 133 is operable to adjust the heading of the vehicle 100 .
  • the accelerator pedal 134 is used to control the operating speed of the driver 141 and thereby control the speed of the vehicle 100 .
  • the braking system 135 is used to control deceleration of the vehicle 100 .
  • Braking system 135 may use friction to slow wheels 144 .
  • braking system 135 may convert kinetic energy of wheels 144 into electrical current.
  • the braking system 135 may also take other forms to slow the wheels 144 to control the speed of the vehicle 100 .
  • Drive system 140 may include components that provide powered motion to vehicle 100 .
  • drive system 140 may include driver 141 , energy source 142 , transmission 143 and wheels 144 .
  • the driver 141 may be an internal combustion engine, an electric motor, an air compression engine or other types of engine combinations, such as a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
  • the drive 141 converts the energy source 142 into mechanical energy.
  • Examples of energy source 142 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power.
  • the energy source 142 may also provide energy to other systems of the vehicle 100 .
  • Transmission 143 may transmit mechanical power from driver 141 to wheels 144 .
  • Transmission 143 may include a gearbox, a differential, and a drive shaft.
  • the transmission device 143 may also include other devices, such as clutches.
  • the drive shafts may include one or more axles that may be coupled to one or more wheels 144.
  • Computing platform 150 may include at least one processor 151 that may execute instructions 153 stored in a non-transitory computer-readable medium such as memory 152 .
  • computing platform 150 may also be a plurality of computing devices that control individual components or subsystems of vehicle 100 in a distributed manner.
  • the processor 151 may be any conventional processor, such as a central processing unit (CPU). Alternatively, the processor 151 may also include, for example, a graphics processing unit (GPU), a field programmable gate array (FPGA), a system on chip (SoC), an application specific integrated circuit (ASIC) or a combination thereof.
  • Although FIG. 1 functionally illustrates the processor, memory, and other elements of the computing platform 150 in the same block, those of ordinary skill in the art will understand that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing.
  • For example, the memory may be a hard drive or other storage medium located in a housing different from that of the computing platform 150.
  • references to a processor or computer are to be understood to include references to collections of processors or computers or memories that may or may not operate in parallel.
  • some components, such as the steering and deceleration components, may each have their own processor that only performs calculations related to component-specific functions.
  • the processor may be located remotely from the vehicle and communicate with it wirelessly. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
  • memory 152 may contain instructions 153 (eg, program logic) executable by processor 151 to perform various functions of vehicle 100 .
  • Memory 152 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 110, perception system 120, decision control system 130, and drive system 140.
  • memory 152 may also store data such as road maps, route information, the vehicle's position, direction, speed, and other such vehicle data, among other information. Such information may be used by vehicle 100 and computing platform 150 during operation of vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
  • Computing platform 150 may control functions of vehicle 100 based on input received from various subsystems (eg, drive system 140 , perception system 120 , and decision-making control system 130 ). For example, computing platform 150 may utilize input from decision control system 130 in order to control steering system 133 to avoid obstacles detected by perception system 120 . In some embodiments, computing platform 150 is operable to provide control over many aspects of vehicle 100 and its subsystems.
  • one or more of these components described above may be installed separately from or associated with the vehicle 100 .
  • memory 152 may exist partially or completely separate from vehicle 100 .
  • the components described above may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as limiting the embodiment of the present application.
  • vehicle 100 may be configured in a fully or partially autonomous driving mode.
  • the vehicle 100 can obtain its surrounding environment information through the perception system 120, and obtain an automatic driving strategy based on the analysis of the surrounding environment information to realize fully automatic driving, or present the analysis results to the user to realize partially automatic driving.
  • An autonomous vehicle traveling on a road can identify objects within its surroundings to determine adjustments to the current speed.
  • the objects may be other vehicles, traffic control devices, or other types of objects.
  • each identified object may be considered independently, and the object's respective characteristics, such as its current speed, acceleration, and distance to the vehicle, may be used to determine the speed to which the autonomous vehicle is to adjust.
  • the vehicle 100 or a sensing and computing device (e.g., computing system 131, computing platform 150) associated with the vehicle 100 may predict the behavior of the identified objects based on the characteristics of the identified objects and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.).
  • the identified objects depend on each other's behavior, so all identified objects can also be considered together to predict the behavior of a single identified object.
  • the vehicle 100 is able to adjust its speed based on the predicted behavior of the identified object.
  • the self-driving car can determine which state the vehicle will need to adjust to (eg, accelerate, decelerate, or stop) based on the predicted behavior of the object.
  • other factors may also be considered to determine the speed of the vehicle 100 , such as the lateral position of the vehicle 100 in the traveling road, the curvature of the road, the proximity of static and dynamic objects, and the like.
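A toy speed-adjustment rule in this spirit, using a simple time-gap heuristic over objects ahead (entirely illustrative; not the patent's algorithm, and all parameters are assumptions):

```python
def adjust_speed(ego_speed, objects, time_gap=2.0, max_decel=3.0, dt=0.1):
    """Pick a target speed from surrounding objects.

    Each object: (distance_ahead_m, speed_mps). The ego slows so that it keeps
    at least `time_gap` seconds to the closest slower object ahead, with the
    commanded deceleration rate-limited per control step.
    """
    target = ego_speed
    for dist, speed in objects:
        # speed that would keep the required time gap to this object
        allowed = speed + max(dist - time_gap * ego_speed, 0.0) / time_gap
        target = min(target, allowed)
    # rate-limit deceleration to max_decel over one control step
    return max(target, ego_speed - max_decel * dt)

# One slower car 10 m ahead at 5 m/s while the ego drives 15 m/s: decelerate.
new_speed = adjust_speed(15.0, [(10.0, 5.0)])
```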
  • the computing device may also provide instructions to modify the steering angle of the vehicle 100 so that the self-driving car follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the self-driving car (e.g., cars in adjacent lanes on the road).
  • the above-mentioned vehicle 100 may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, playground vehicle, construction equipment, tram, golf cart, train, etc.; the embodiment of the present application imposes no particular limitation.
  • the vehicle 100 includes a plurality of vehicle integration units (vehicle integration unit, VIU) 11, a communication box (telematic box, T-BOX) 12, a cockpit domain controller (cockpit domain controller, CDC) 13, a mobile data center (mobile data center, MDC) 14, and a vehicle domain controller (vehicle domain controller, VDC) 15.
  • the vehicle 100 also includes various types of sensors arranged on the vehicle body, including laser radar 21, millimeter wave radar 22, ultrasonic radar 23, and camera device 24, and each type of sensor may include multiple units. It should be understood that although FIG. 2 shows a location layout of the different sensors on the vehicle 100, the number and location layout of the sensors in FIG. 2 are only an example; those skilled in the art can select appropriate sensor types, quantities, and location layouts according to actual needs.
  • Four VIUs are shown in FIG. 2. It should be understood that the number and positions of the VIUs in FIG. 2 are only an example, and those skilled in the art can select an appropriate number and positions of VIUs according to actual needs.
  • the vehicle integration unit VIU 11 provides some or all of the data processing or control functions required by multiple vehicle components. The VIU can have one or more of the following functions.
  • Electronic control function: the VIU is used to realize the electronic control function provided by the electronic control unit (ECU) inside some or all of the vehicle components, for example, the control function required by a certain vehicle component, or the data processing function required by a certain vehicle component.
  • Functions the same as those of a gateway: the VIU can also have some or all of the functions of a gateway, for example, protocol conversion, protocol encapsulation and forwarding, and data format conversion.
  • the data involved in the above functions may include the operating data of the actuators in the vehicle components, for example, the motion parameters of the actuators, the working status of the actuators, etc.
  • the data involved in the above functions can also be data collected by the data collection unit (for example, a sensitive element) of the vehicle components, for example, road information of the road on which the vehicle is driving, or weather information collected by the vehicle's sensitive elements; this embodiment of the present application does not specifically limit it.
  • the vehicle 100 can be divided into multiple domains, each domain having an independent domain controller. Specifically, two kinds of domain controllers are shown in FIG. 2: the cockpit domain controller (CDC) 13 and the vehicle domain controller (VDC) 15.
  • the cockpit domain controller CDC 13 can be used to realize functional control of the cockpit area of the vehicle 100, and the vehicle components in the cockpit area can include a head-up display (HUD), instrument panel, radio, central control screen, navigation, camera, etc.
  • the vehicle domain controller VDC 15 can be used to coordinate and control the power battery and driver 141 of the vehicle to improve the power performance of the vehicle 100.
  • the vehicle controller 132 in FIG. 1 can implement various functions of the VDC.
  • the T-BOX 12 can be used to realize the communication connection between the vehicle 100 and the internal and external equipment of the vehicle.
  • the T-BOX can obtain in-vehicle device data through the bus of the vehicle 100, and can also communicate with the user's mobile phone through a wireless network.
  • the T-BOX 12 can be included in the communication system 111 of FIG. 1 .
  • the mobile data center (MDC) 14 is used to output drive, transmission, steering and braking execution control commands based on core control algorithms such as environment perception and positioning, intelligent planning and decision-making, and vehicle motion control, so as to realize automatic control of the vehicle 100.
  • the interactive interface realizes the human-computer interaction of vehicle driving information.
  • computing platform 150 in FIG. 1 may implement various functions of the MDC 14.
  • the VIUs 11 in FIG. 2 form a ring topology network; each VIU 11 communicates with the sensors in its immediate vicinity, and the T-BOX 12, CDC 13, MDC 14 and VDC 15 communicate with the ring topology network of VIUs.
  • VIU 11 can acquire information from various sensors, and report the acquired information to CDC 13, MDC 14 and VDC 15.
  • T-BOX 12, CDC 13, MDC 14, and VDC 15 can also communicate with each other.
  • the connections between VIUs can adopt, for example, Ethernet; the connections between the VIUs and the T-BOX 12, CDC 13, MDC 14 and VDC 15 can adopt, for example, Ethernet or peripheral component interconnect express (PCIe); and the connection between a VIU and a sensor can adopt, for example, controller area network (CAN), local interconnect network (LIN), FlexRay, media oriented system transport (MOST), etc.
  • the embodiment of the present application provides a simulation test system which is based on a scene trigger mechanism, has high reliability, and can realize vehicle-in-the-loop testing; in the simulation test, it can be applied to the vehicle 100 shown in FIG. 1 or FIG. 2.
  • the simulation test system may include an intelligent driving central processing unit, a vehicle positioning and sensing device unit, a data platform and a simulation platform.
  • the intelligent driving central processing unit can be an automatic driving system; it can be the computing platform shown in FIG. 1, the MDC platform shown in FIG. 2, or another system or domain controller responsible for automatic driving or intelligent driver assistance computing. This application does not limit this.
  • the positioning and sensing device unit in the simulation test system provided in the embodiment of the present application may be a sensor or device in the sensing system 120 as shown in FIG. 1 .
  • the data platform and the simulation platform can be servers provided by the test site, or a cloud server whose geographical location is decoupled from the test site.
  • the cloud server can be an actual server or a virtual server. This application does not limit this.
  • FIG. 3 is a schematic diagram of a simulation system architecture provided by an embodiment of the present application.
  • the simulation system architecture can include several aspects such as a test scene library, a simulation system, an automatic driving system (ADS), and automatic driving vehicles.
  • the test method provided by the embodiment of the present application can be applied to the test of the automatic driving system.
  • the test method provided by the embodiment of the present application can also be applied to testing other systems, such as advanced driving assistance systems (ADAS). This application does not limit the type of the system under test.
  • the scene described in the specification of this application can be understood as a collection of various information and constraints such as the vehicle and the environment around the vehicle, roads, weather, traffic signals, and traffic participants.
  • the scene can be a highway scene, urban road scene, off-road road scene, desert road scene, etc. according to the road type; another example, the scene can be a gas station scene, a shopping center scene, a parking garage scene according to the function or service etc.; in addition, the scene can also be an automatic driving scene, an automatic parking scene, an emergency braking scene, an anti-lock braking scene, an anti-skid scene, etc., which are divided according to driving tasks or vehicle function applications.
  • the simulation scene described in the description of the present application may be a scene presented by a computer, for example, the scene in the real world may be simulated by a computer, or the scene in the real world may be reproduced by collecting scene data in the real world and using a simulation tool. This application is not limited to this.
  • the test scene library includes a variety of scenarios.
  • the scenarios in the test scenario library may be the scenarios obtained through the road test, or the scenarios obtained through other ways.
  • data can be collected through a real-vehicle road test, a road test scene can be extracted from the road test data, and the scene obtained through the road test can be added to the test scene library.
  • the test scenario library may also include other publicly available scenarios, such as scenarios provided by a shared platform, or scenarios obtained through other commercially available channels.
  • the test scene library can be stored locally, for example, in a simulation test system, a workstation, and the like.
  • the test scene library can also be saved on a cloud server, where it can be updated in real time and shared by multiple simulation test platforms.
  • the simulation scene information described in this application includes scene control information, scene trigger information, and traffic participant information.
  • the scene control information may come from a scene control file.
  • the scene control file may include data of a certain scene to be simulated. When the scene control information is loaded, roads, traffic facilities, obstacles, traffic participants, etc. in the simulation scene may be generated.
  • the scene control information can be understood as the information needed for the computer or simulator to generate the simulation scene, or the environment, roads, traffic facilities, obstacles and other information in the scene.
  • scene trigger information can be used to switch between planning control algorithms.
  • the planning control algorithm is also referred to as the regulation and control algorithm in this application specification.
  • the planning control algorithm is used for the planning control of the vehicle, so that the vehicle can drive along the planned path, or perform actions such as acceleration and braking according to the results of the planning control.
  • This application does not limit specific driving tasks or actions performed by the vehicle.
  • the scene trigger information provided in the embodiment of the present application may include vehicle trigger information, traffic participant trigger information, traffic signal trigger information, and the like.
  • the own vehicle trigger information may be state information when the own vehicle satisfies a specific condition. It should be noted that the vehicle described in the specification of this application is sometimes also described as the own vehicle, and the two can be understood as having the same meaning unless otherwise specified.
  • the ego vehicle trigger information may be information that one or more aspects such as the location, motion state, and driving task of the self-driving vehicle meet specific conditions.
  • the motion state may include position, velocity, acceleration and so on.
  • the trigger information of the own vehicle can be that the position of the vehicle reaches point A; as another example, that the speed of the vehicle reaches 5 m/s; as another example, that the acceleration of the vehicle reaches 2 m/s²; as another example, that the vehicle arrives at point A and the speed reaches 5 m/s; as another example, that the vehicle arrives at point A and the acceleration reaches 2 m/s²; as another example, that the vehicle arrives at point A, the speed reaches 5 m/s, and the acceleration reaches 2 m/s².
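The compound trigger conditions above can be sketched as a predicate over the ego vehicle's state. This is only an illustrative sketch: the field names, tolerances, and threshold values are hypothetical and not specified by the patent.

```python
from dataclasses import dataclass

@dataclass
class EgoState:
    position: tuple  # (x, y) in metres, test-site frame (assumed representation)
    speed: float     # m/s
    accel: float     # m/s^2

def scene_triggered(state, target_pos=None, target_speed=None,
                    target_accel=None, pos_tol=0.5, tol=0.1):
    """Return True when every configured condition is satisfied.

    Unset conditions (None) are ignored, so the same predicate covers
    position-only, speed-only, and combined triggers such as
    "vehicle arrives at point A and the speed reaches 5 m/s".
    """
    if target_pos is not None:
        dx = state.position[0] - target_pos[0]
        dy = state.position[1] - target_pos[1]
        if (dx * dx + dy * dy) ** 0.5 > pos_tol:
            return False
    if target_speed is not None and abs(state.speed - target_speed) > tol:
        return False
    if target_accel is not None and abs(state.accel - target_accel) > tol:
        return False
    return True

# "vehicle arrives at point A and the speed reaches 5 m/s"
state = EgoState(position=(100.0, 0.0), speed=5.0, accel=2.0)
print(scene_triggered(state, target_pos=(100.0, 0.0), target_speed=5.0))  # True
```

The tolerances make the check robust against sampling noise, since a real vehicle never hits an exact floating-point speed or position.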
  • the ego vehicle trigger information can be used for ego vehicle path planning and for controlling the ego vehicle.
  • the trigger information of the own vehicle can also be the state of a subsystem or component of the own vehicle; for example, it can be the state of a sound or light signal sent by the own vehicle, including the left turn signal indication, right turn signal indication, brake signal indication, emergency or failure signal indication, whistle signal, yield signal, automatic driving status signal, operation status signal, other indication signals, etc.; as another example, it can be the status of the doors and windows, the air conditioner, the motor, the battery, the thermal management system, etc.
  • it may also be the state of the vehicle suspension, the state of the braking system, the state of the steering system, and the like.
  • the trigger information of the self-vehicle may be the information that the state of the self-vehicle and its subsystems and components meet specific conditions, which is not limited in this application.
  • the ego vehicle trigger information may also be driving task information, for example, it may be an automatic driving task, for example, it may also be an automatic emergency braking task or other tasks, which is not limited in this application.
  • the traffic participant trigger information is the information that the state of the traffic participant satisfies a specific condition.
  • the traffic participants may include other motor vehicles, non-motor vehicles, pedestrians, animals, etc., and this embodiment of the present application does not limit the traffic participants. Traffic participants may have an impact on the planning, decision-making, and control of the ego vehicle. Traffic participant trigger information can be used to control the timing of traffic participant injection into the simulation scene. When the trigger conditions are met, specific traffic participants can be injected into the simulation scene. Traffic participant trigger information can be used to update the scene. When the trigger condition is met, the information of the simulation scene is updated.
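The injection timing described above can be sketched as a queue of pending participants, each guarded by its own trigger condition. The function and data structures here are illustrative stand-ins, not structures defined by the patent:

```python
def inject_participants(scene, pending, ego_state):
    """Move each pending participant whose trigger fires into the scene.

    `pending` is a list of (trigger_fn, participant) pairs; trigger_fn
    takes the ego state and returns True when the participant should be
    injected into the simulation scene. Returns the still-pending pairs.
    """
    still_pending = []
    for trigger_fn, participant in pending:
        if trigger_fn(ego_state):
            scene.append(participant)   # participant enters the scene now
        else:
            still_pending.append((trigger_fn, participant))
    return still_pending

scene = []
pending = [
    # hypothetical rule: a pedestrian appears once the ego vehicle passes x = 50 m
    (lambda ego: ego["x"] >= 50.0, {"type": "pedestrian", "x": 60.0}),
]
pending = inject_participants(scene, pending, {"x": 55.0})
print(len(scene))  # 1
```

Calling this on every scene update gives the behaviour described above: participants stay out of the scene until their trigger condition is met, then appear at a repeatable, ego-state-defined moment.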
  • the traffic participant information can be the position, speed, acceleration, and combinations of these variables, of other motor vehicles, bicycles, pedestrians, animals, etc.; details are not repeated here.
  • the traffic participant trigger information can be the information that the state of subsystems or parts of other motor vehicles meets specific conditions, or that the state of subsystems or parts of other non-motor vehicles meets specific conditions
  • it can be the status of sound and light signals sent by other motor vehicles or non-motor vehicles, including left turn signal indication, right turn signal indication, brake signal indication, emergency or failure signal indication, whistle signal, yield signal, Automatic driving status signal, operation status signal, other indication signals, etc.
  • another example can be the status of car doors and windows, air conditioner status, motor status, battery status, thermal management system status, etc.
  • the trigger information of traffic participants can also be the actions, gestures, voices, expressions, etc. of pedestrians.
  • Traffic signals may include signals of traffic lights, signals of traffic directors or traffic managers, traffic sign signals, traffic marking signals, etc.
  • the signals of the traffic lights may include green lights, yellow lights, red lights, arrow lights, fork lights, warning lights, motor vehicle signal lights, non-motor vehicle signal lights, and the like.
  • Traffic sign signals may include speed limit indication signals, speed limit release indication signals, road guidance signals, lane change indication signals, warning signs, prohibition signs, instruction signs, distance signs, road construction safety signs, etc.
  • Traffic marking signals may include indicating markings, prohibiting markings, warning markings, etc.
  • Traffic manager signals may include stop, go straight, turn left, turn right, wait, change lanes, slow down, pull over, etc.
  • the traffic signal trigger information may be that the state of the traffic signal light changes to indicate passing, or that the state of the traffic signal light changes to indicate that passing is prohibited. In another possible implementation manner, the traffic signal trigger information may be that a 60 km/h speed limit sign appears, or that a sign lifting the 60 km/h speed limit appears. In another possible implementation manner, the traffic signal trigger information may be that a lane line indicating a merge appears, or that a tidal lane sign switches the first lane from a through lane to a left-turn lane. In another possible implementation manner, the traffic signal trigger information may be that the traffic manager signals a stop, or signals to proceed to the first location. The traffic signal trigger information can be used to control the traffic signals in the simulation scene.
  • the positioning conversion module uses the scene control information and the current positioning information of the test vehicle to obtain the conversion relationship between the vehicle's position in the simulation scene and its position in the real scene, and maps the ego vehicle into the simulation scene. It can be understood that when the ego vehicle moves on the actual test field, the position of the simulated vehicle mapped from the ego vehicle in the simulation scene also changes according to the ego vehicle's position on the actual field and the position conversion relationship.
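One plausible form of such a conversion relationship is a planar rigid transform (a rotation plus a translation) fitted when the scene is loaded, so that the vehicle's current site position maps onto its intended start position in the scene. The patent does not fix the mathematics of the module, so this sketch is an assumption:

```python
import math

def make_conversion(real_anchor, sim_anchor, heading_offset):
    """Build a real-site -> simulation-scene position conversion.

    real_anchor / sim_anchor: (x, y) of the same logical point in each
    frame; heading_offset: rotation (radians) from the site frame to the
    scene frame. Returns a function mapping site coordinates to scene ones.
    """
    cos_t, sin_t = math.cos(heading_offset), math.sin(heading_offset)
    def to_sim(x, y):
        dx, dy = x - real_anchor[0], y - real_anchor[1]
        return (sim_anchor[0] + cos_t * dx - sin_t * dy,
                sim_anchor[1] + sin_t * dx + cos_t * dy)
    return to_sim

# at scene load: the ego stands at site (10, 0) and should appear at scene (500, 200)
to_sim = make_conversion((10.0, 0.0), (500.0, 200.0), 0.0)
print(to_sim(15.0, 0.0))  # (505.0, 200.0): 5 m driven on site = 5 m in the scene
```

Because the transform is rigid, relative displacements and motion states are preserved exactly, which matches the "completely corresponding" behaviour described for FIG. 5 below.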
  • the simulation system updates the simulation scene by using the scene trigger information, traffic participant information and the location information projected from the vehicle location.
  • the simulation system sends the simulation scene information to the vehicle in real time, for example, it can be sent to the vehicle's automatic driving system or driving assistance system.
  • the information transmission between the simulation system and the automatic driving system can be completed in a wired or wireless manner.
  • it can be completed through the wireless communication mode that can be used by the communication system as shown in FIG. 1 , which will not be repeated here.
  • the simulation scene information can be transmitted to the vehicle wirelessly.
  • the simulation system may be installed on the vehicle and be connected to the vehicle by wire.
  • the simulation system sends the simulation scene information to the vehicle through a wired connection.
  • the simulation system can also communicate with the cloud, and refresh the simulation scene library in the simulation system through the simulation scene library stored in the cloud.
  • Fig. 4 is a flow chart of the principle of the position conversion module and the update of the conversion relationship provided by the embodiment of the present application.
  • the simulation system determines whether to update the position conversion relationship. If it needs to be updated, the simulation system obtains a new position conversion relationship according to the simulation scene information and the current positioning information.
  • the simulation system can obtain a new position conversion relationship according to scene control information and vehicle positioning information. After obtaining the new position conversion relationship, the simulation system obtains the position information of the simulated vehicle in the simulation scene according to the position information of the vehicle and the new position conversion relationship.
  • the position information of the vehicle may include the position of the vehicle and the posture of the vehicle at this position, for example, the orientation of the vehicle, such as facing due east, due west, 5 degrees south of east, or 3 degrees north of east.
  • the position change and motion state of the vehicle on the actual test site completely correspond to the position and motion relationship of the vehicle projected in the simulation scene.
  • the vehicle moves from the first position to the second position on the actual test site, and the projected position of the vehicle in the simulation scene changes from the third position to the fourth position; moreover, the change in motion state of the vehicle while moving from the first position to the second position on the actual test site is completely consistent with the change in motion state of the projected vehicle while moving from the third position to the fourth position in the simulation scene.
  • the relative positional relationship between the first position and the second position in the actual test site is completely consistent with the relative positional relationship between the third position and the fourth position in the simulation scene.
  • Fig. 5 is a schematic diagram of the principle of position conversion.
  • the position change and motion state of the vehicle on the actual test site may not completely correspond to the position and motion relationship of the vehicle projected in the simulation scene.
  • the projected change in position of a vehicle in a simulated scene can be different from the change in position of the vehicle on an actual test field.
  • the vehicle moves from the first position shown by the solid-line box to the second position shown by the dotted-line box on the test site; in the simulation scene, the vehicle moves from the third position shown by the solid-line box to the fourth position shown by the dotted-line box.
  • the vector of the vehicle moving from the first position to the second position in the actual test site is shown by the solid arrow; the vector of the vehicle moving from the third position to the fourth position in the simulation scene is shown by the dotted arrow .
  • the vector shown by the solid arrow may not be consistent with the vector shown by the dotted arrow, and the position change of the vehicle on the actual test site may not be consistent with the position change in the simulation scene.
  • the position change process of the vehicle in the process of changing from the third position to the fourth position in the simulation scene can also be different from the position change process of the vehicle in the process of changing from the first position to the second position in the test field .
  • Fig. 6 is a schematic diagram of the update principle of the position conversion relationship during multiple simulation scene tests in the field test.
  • the simulation system includes a positioning conversion module, which is used to obtain the position conversion relationship according to the position information of the vehicle and the simulation scene information.
  • the simulation scene information includes scene control information
  • the positioning conversion module can be used to obtain the position conversion relationship according to the vehicle position information and the simulation scene information.
  • position information and scene control information to obtain the position conversion relationship.
  • at the position shown by the solid-line box, the simulation system obtains the first position conversion relationship according to the vehicle positioning information and the scene control information; at the position shown by the dotted-line box, the simulation system obtains the second position conversion relationship.
  • the first position conversion relationship and the second position conversion relationship may be different.
  • at different positions of the vehicle on the test field, the position conversion relationship between the vehicle's position on the test field and its position in the simulation scene may be the same or different.
  • the position conversion relationship can be changed according to the actual situation of the test site. For example, when the test site area is small, by adjusting the position conversion relationship, testing of simulation scenes covering a larger field area can be completed on a test site with a smaller area.
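Re-anchoring the conversion relationship mid-test is one way such a small site could cover a large simulated route: when the vehicle nears the site boundary, a new conversion is computed so that its current site position still maps to its current scene position, and the site axis can be mirrored so the vehicle can turn around while the simulated route keeps advancing. The mechanism below is a hypothetical illustration of this idea, not the patent's specified method:

```python
def reanchor(site_pos, scene_pos, flip=False):
    """Build a new site->scene conversion anchored at the vehicle's
    current position (translation only, for brevity). With flip=True the
    site x-axis is mirrored, so a vehicle turning around at the site
    boundary keeps advancing along the simulated route.
    """
    if flip:
        return lambda x, y: (scene_pos[0] + (site_pos[0] - x),
                             scene_pos[1] + (y - site_pos[1]))
    return lambda x, y: (scene_pos[0] + (x - site_pos[0]),
                         scene_pos[1] + (y - site_pos[1]))

# first pass: site 0..100 m covers scene 0..100 m
to_sim = reanchor((0.0, 0.0), (0.0, 0.0))
assert to_sim(100.0, 0.0) == (100.0, 0.0)
# at the boundary, re-anchor with a mirrored axis: driving back toward
# site x = 0 continues the scene route from 100 m to 200 m
to_sim = reanchor((100.0, 0.0), (100.0, 0.0), flip=True)
print(to_sim(0.0, 0.0))  # (200.0, 0.0)
```

Because the anchor point maps to itself at the moment of the update, the projected vehicle never jumps in the scene, which is what makes the update transparent to the system under test.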
  • the position conversion module provided by the embodiment of the present application can realize the vehicle-in-the-loop test without site constraints, bringing higher flexibility to the test.
  • for the simulated scene under test: control the traffic participants in the scene to move according to predetermined trajectories; use the scene control information and the positioning information of the test vehicle to establish the positional relationship between the test vehicle and the virtual self-driving vehicle in the simulation system, so that the test is not limited by the site; and design the scene initialization process and simulation process around the scene trigger information, using different regulation and control algorithms in the scene initialization phase and the scene simulation phase to eliminate human interference and ensure the repeatability of multiple test results.
  • FIG. 7 is a schematic diagram of a simulation test flow provided by an embodiment of the present application.
  • the simulation system obtains the simulation scene after loading the scene control information; the simulation system uses the own-vehicle positioning information and the scene trigger information to update the status information (position, speed, orientation, etc.) of the traffic participants in the simulation scene, and sends the simulation scene information to the vehicle.
  • the simulation scene information can be sent to the automatic driving system; the sent scene information can be recognized by the automatic driving system, so that the automatic driving system behaves as if the vehicle were driving in the real-world scene corresponding to the simulation scene.
  • the automatic driving system of the vehicle selects different planning control algorithms according to the trigger state of the scene, and sends the control instruction to the vehicle.
  • the scene triggering state may include that the scene is triggered and the scene is not triggered.
  • the vehicle continuously sends positioning and motion information to the automatic driving system for the calculation of automatic driving or driving assistance functions; and the vehicle sends the vehicle positioning information to the simulation system, which updates the scene according to the vehicle positioning information and detects whether the scene trigger condition is met.
  • the scene information used in this application includes information about the vehicle in the scene, scene trigger information, and traffic participant information.
  • the vehicle-related information is used to update the positioning conversion relationship
  • the scene trigger information is used to switch the planning control algorithm
  • the traffic participant information is used to update the scene.
  • the planning control algorithm in this application is also referred to as the regulation and control algorithm, and the meanings of the two are the same.
  • regulation and control algorithm A is used in the scene test stage to test the automatic driving system; regulation and control algorithm B is used in the scene initialization stage to make the self-vehicle reach the scene trigger target state.
  • the automatic driving system uses the planning control algorithm B to realize the initialization process.
  • the planning control algorithm B can be used to make the vehicle reach the motion state or positioning required by the scene trigger condition.
  • the automatic driving system adopts the regulation and control algorithm A to enter the scene simulation test process.
  • the planning control algorithm A may be an algorithm to be simulated and tested. The simulation of the scene can be completed by looping the above simulation process.
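The two-phase control described above amounts to a small state machine: run algorithm B until the scene trigger fires, then hand over to algorithm A. In the sketch below, both control laws are stand-ins (constant acceleration for B, holding speed for A), since the real algorithm A is the system under test and neither algorithm is specified by the patent:

```python
def run_test(trigger_speed, dt=0.1, accel_b=2.0, max_steps=1000):
    """Initialization with 'algorithm B' (constant acceleration), then
    switch to 'algorithm A' (a stand-in) when the trigger condition
    (speed reaches trigger_speed) is met.

    Returns (time at which the trigger fired, log of (phase, speed)).
    """
    speed, t, log = 0.0, 0.0, []
    triggered_at = None
    for _ in range(max_steps):
        if triggered_at is None:                 # phase: scene initialization
            speed = min(speed + accel_b * dt, trigger_speed)
            log.append(("B", round(speed, 3)))
            if speed >= trigger_speed:           # scene trigger condition met
                triggered_at = t
        else:                                    # phase: scene simulation
            log.append(("A", round(speed, 3)))   # algorithm A under test runs here
            break
        t += dt
    return triggered_at, log

t_trig, log = run_test(trigger_speed=10.0)
print(t_trig)  # time at which the vehicle first reaches 10 m/s
```

Because phase B is a fixed, deterministic control law and the handover point is defined purely by the trigger condition, every repetition of the test enters phase A in the same state, which is the repeatability property the text emphasises.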
  • the scene trigger information mentioned in this application may be the position, speed, acceleration and the combination of these variables of the self-driving vehicle. Reference may be made to the foregoing, and details are not repeated here.
  • FIG. 8 is a schematic diagram of a process of a simulation test method provided by an embodiment of the present application.
  • the testing method provided by the present application may include the following processes: a process of loading a simulation scene, an initialization process, and a process of scene simulation.
  • processes are also sometimes referred to as stages. Details are given below.
  • the loading of the scene can be realized by the simulation system obtaining the simulation scene according to the scene control information. Then the initialization process is carried out, using regulation and control algorithm B to make the vehicle meet the trigger condition in the simulation scene. When the vehicle reaches the trigger condition in the simulation scene, it enters the scene simulation process, during which the automatic driving system executes regulation and control algorithm A.
  • when the scene simulation process includes multiple sub-processes, different regulation and control algorithms may be used.
  • the scenario simulation process may also include a first test target, a second test target, a third test target, and the like.
  • the regulation and control algorithm under each target can also be different, for example, testing with regulation and control algorithm C under the first test target, with regulation and control algorithm D under the second test target, and with regulation and control algorithm E under the third test target.
  • FIG. 9 is a schematic diagram of an initialization process provided by an embodiment of the present application.
  • the abscissa represents time
  • the ordinate represents speed
  • the scenario triggering condition may be that the speed of the vehicle in the simulation scenario satisfies a certain condition.
  • the vehicle starts to accelerate from a standstill: the speed accelerates from 0 along a certain curve to the initial trigger point, shown as "init Tri Pt" in the figure, and then decelerates along a certain curve to the scene trigger point that satisfies the scene trigger conditions, shown as "scen Tri Pt" in FIG. 9. It should be noted that during the initialization process, the vehicle uses regulation and control algorithm B to meet the trigger condition.
  • the regulation and control algorithm B can be a control algorithm that accelerates the vehicle with a constant acceleration; for example, the vehicle can be accelerated at 2 m/s² until it reaches a speed of 10 m/s.
  • the regulation and control algorithm B can also be an acceleration control algorithm with varying acceleration; this application does not limit the specific implementation of the regulation and control algorithm B. As shown in FIG. 9, under such a control algorithm the acceleration of the vehicle changes: it is high at the beginning and then gradually decreases.
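The decaying-acceleration shape in FIG. 9 is consistent with, for example, a first-order proportional speed controller, where acceleration is proportional to the remaining speed error. This is only an illustrative guess at one possible form of algorithm B, with hypothetical gain and step values:

```python
def speed_profile(v_target, k=0.5, dt=0.1, steps=120):
    """First-order speed tracking: a = k * (v_target - v).

    Acceleration is largest at standstill and decays as the speed
    approaches the target, matching the curve shape in FIG. 9.
    """
    v, out = 0.0, []
    for _ in range(steps):
        a = k * (v_target - v)   # decaying acceleration
        v += a * dt
        out.append(v)
    return out

profile = speed_profile(10.0)
# the per-step speed increments (i.e. acceleration) strictly decrease
diffs = [b - a for a, b in zip(profile, profile[1:])]
print(all(d2 < d1 for d1, d2 in zip(diffs, diffs[1:])))  # True
```

Any deterministic law of this kind would serve the purpose stated above: the vehicle approaches the trigger state along the same curve in every run, so the initialization phase introduces no run-to-run variation.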
  • FIG. 10 and FIG. 11 are schematic diagrams of data collected on an actual vehicle during a simulation test of the automatic emergency braking (AEB) function; in FIG. 10 the abscissa represents position and the ordinate represents speed, and in FIG. 11 the abscissa represents time and the ordinate represents speed.
  • the simulation test method provided in the embodiment of the present application may use the position reaching the target position as a scene trigger condition, for example, the trigger position point shown in “trigger” in FIG. 10 .
  • the curve shown on the negative half-axis of the abscissa in Figure 10 is the initialization process.
  • the scene trigger condition is reached at 0 on the abscissa, and the simulation test is performed on the positive half-axis of the abscissa.
  • the simulation test method provided in the embodiment of the present application may use the speed reaching the target speed as the scene trigger condition.
  • the curve shown on the negative half-axis of the abscissa in Figure 11 is the initialization process.
  • the scene trigger condition is reached at 0 on the abscissa, and the simulation test is performed on the positive half-axis of the abscissa.
  • the curve shapes of multiple simulation tests are highly consistent, which reflects the good effect brought by the test method provided by the embodiment of the present application and ensures the consistency of each test across multiple runs. As a result, more accurate test results can be provided, and it is convenient to carry out simulation tests on vehicles with different automatic driving systems, different models, and different conditions.
  • the simulation test system provided by the embodiment of this application introduces a positioning conversion module into the vehicle-in-the-loop test, which realizes convenient and flexible testing and makes the simulation test not limited to the test site; at the same time, by introducing the trigger information of the simulation scene, different regulation and control algorithms can be used at different stages, which reduces the influence of human factors in the vehicle-in-the-loop simulation test and ensures the reliability and consistency of multiple tests.


Abstract

An embodiment of the present application provides a vehicle test method. By establishing a position conversion relationship between the virtual vehicle in the simulation scene and the real test vehicle, designing an initialization process and a scene simulation process for the simulation, adopting different regulation and control algorithms in the different processes, and switching between the regulation and control algorithms by means of scene trigger information, vehicle-in-the-loop testing with rich and diverse traffic participants, few site constraints, and repeatable test results is realized. This helps to guarantee the consistency of tests of advanced automatic driving systems and can accurately reproduce key road test scenes. The test method provided by this application can be applied to the testing of intelligent vehicles.

Description

一种测试方法和系统 技术领域
本申请涉及汽车领域,尤其涉及一种测试方法和系统。
背景技术
高度智能的自动驾驶系统是未来自动驾驶技术的发展方向，而智能驾驶系统的测试是智能驾驶系统开发过程中的关键部分。软件在环(software in the loop,SIL)、模型在环(model in the loop,MIL)、硬件在环(hardware in the loop,HIL)等多种技术手段可以用来加速自动驾驶系统算法迭代和测试验证。SIL仿真测试方便灵活、可重复性强、测试成本低、安全性好。然而，由于车辆数学建模不够精确，仿真所得结果与实际车辆运动轨迹并不一致。HIL台架造价昂贵，且难以考虑真实的车辆动力学。整车在环(vehicle in the loop,VIL)测试采用真实车辆在环，弥补了SIL和HIL仿真车辆动力学模型不精确的不足。
另一方面,道路测试是高级智能驾驶系统的主要测试手段。道路测试过程中会遇到危险性高的极端场景(corner/edge case),此类场景路测难以重现。针对此类关键场景进行重建,并快速高效、低成本地进行整车在环测试对智能驾驶系统的算法开发、迭代和验证具有重要意义。但是,目前在进行整车在环测试中还没有针对此类重建场景的测试方法,并且还未有关于保证VIL测试结果一致性的方法。
发明内容
本申请提出一种灵活方便、测试成本低，且能够保证车辆在环测试高可靠性的测试系统。
第一方面,本申请实施例提供了一种车辆测试方法。本申请第一方面的第1种可能的实施方式提供的测试方法包括:获取第一位置信息,第一位置信息为车辆的位置信息;根据第一位置信息获取第二位置信息,第二位置信息为车辆在仿真场景中的位置信息。
根据第一方面第1种可能的实施方式,在第2种可能的实施方式中,根据第一位置信息获取第二位置信息,具体包括:根据第一位置信息和仿真场景信息获取位置转换关系;根据第一位置信息和位置转换关系获取第二位置信息。
通过采用定位转换关系的自动更新,解耦了仿真场景的位置限制,测试灵活方便,提升了测试效率且丰富了VIL测试内容,降低了测试成本。
根据第一方面第2种可能的实施方式,在第3种可能的实施方式中,在根据第一位置信息获取第二位置信息之后,方法还包括:根据第二位置信息更新仿真场景信息,仿真场景信息包括交通参与者信息;发送仿真场景信息。
本处需要说明的是,利用实车定位信息和场景信息,实现VIL测试时任意位置的场景加载和位置转换关系的自动更新;并且测试场地不受限制,可进行不同地面附着系数的测试,丰富了VIL测试的能力。
根据第一方面第2种或3种可能的实施方式,在第4种可能的实施方式中,仿真场景信息还包括场景触发信息。
根据第一方面第4种可能的实施方式,在第5种可能的实施方式中,场景触发信息用于切换测试阶段,测试阶段包括第一阶段和第二阶段;其中,车辆在第一阶段使用第一规划控制算法,车辆在第二阶段使用第二规划控制算法。方法还包括:根据场景触发信息由第一阶段进入第二阶段。
本处需要说明的是,本申请提供的方法通过在仿真初始化过程使用第一规控算法,以及基于场景触发信息的第二规控算法,降低了VIL测试过程中人为因素的影响。
根据第一方面第4种或第5种可能的实施方式,在第6种可能的实施方式中,场景触发信息包括以下至少一种:车辆触发信息、交通参与者触发信息和交通信号触发信息。
根据第一方面第6种可能的实施方式,在第7种可能的实施方式中,车辆触发信息包括设定的车辆运动状态,车辆运动状态包括以下至少一种:车辆的位置信息、车辆的运动信息、驾驶任务信息、自车子系统或零部件的状态。
根据第一方面第6种或第7种可能的实施方式,在第8种可能的实施方式中,交通参与者触发信息包括以下至少一种:设定的运动状态、设定的位置、在仿真场景中出现的时间。
根据第一方面第6种至第8种中任一种可能的实施方式,在第9种可能的实施方式中,交通信号触发信息包括以下至少一种:设定的交通信号灯信号、设定的交通标志信号、设定的交通标线信号、设定的交通管理者信号。
根据第一方面第1种至第9种中任一种可能的实施方式,在第10种可能的实施方式中,仿真场景是根据仿真场景库获得的,仿真场景库包括多个仿真场景,多个仿真场景包括根据道路测试数据获得的仿真场景。
本处需要说明的是,利用场景仿真的初始化过程和场景加载过程,通过场景触发信息保证了场景的一致性,可以降低VIL测试结果受不同车型和场景的影响。
本申请第二方面提供一种车辆测试系统,在第二方面的第1种可能的实施方式中,该系统包括:获取模块,获取模块用于获取第一位置信息,第一位置信息为车辆的位置信息;位置转换模块,位置转换模块用于根据第一位置信息获取第二位置信息,第二位置信息为车辆在仿真场景中的位置信息。
根据第二方面第1种可能的实施方式,在第2种可能的实施方式中,位置转换模块用于根据第一位置信息获取第二位置信息,具体包括:位置转换模块用于根据第一位置信息和仿真场景信息获取位置转换关系;位置转换模块用于根据第一位置信息和位置转换关系获取第二位置信息。
根据第二方面第2种可能的实施方式,在第3种可能的实施方式中,还包括仿真模块和发送模块,在位置转换模块根据第一位置信息获取第二位置信息之后,仿真模块用于根据第二位置信息更新仿真场景信息,仿真场景信息包括交通参与者信息;发送模块用于发送仿真场景信息。
根据第二方面第2种或第3种可能的实施方式,在第4种可能的实施方式中,仿真场景信息还包括场景触发信息。
根据第二方面第4种可能的实施方式,在第5种可能的实施方式中,场景触发信息用于切换测试阶段,测试阶段包括第一阶段和第二阶段;其中,车辆在第一阶段使用第一规划控制算法,车辆在第二阶段使用第二规划控制算法。
系统还包括:仿真触发模块,仿真触发模块用于控制车辆根据场景触发信息由第一阶段进入第二阶段。
本申请设计的VIL测试初始化过程的规控算法和算法切换模块,减少VIL测试时人为因素的干扰,保证场景触发时自车状态的一致,大大提升多次测试的可重复性和稳定性。
根据第二方面第4种或第5种可能的实施方式,在第6种可能的实施方式中,场景触发信息包括以下至少一种:车辆触发信息、交通参与者触发信息和交通信号触发信息。
根据第二方面第6种可能的实施方式，在第7种可能的实施方式中，车辆触发信息包括设定的车辆运动状态，车辆运动状态包括以下至少一种：车辆的位置信息、车辆的运动信息、驾驶任务信息、自车子系统或零部件的状态。
根据第二方面第6种或第7种可能的实施方式,在第8种可能的实施方式中,交通参与者触发信息包括以下至少一种:设定的运动状态、设定的位置、在仿真场景中出现的时间。
根据第二方面第6种至第8种中任一项可能的实施方式,在第9种可能的实施方式中,交通信号触发信息包括以下至少一种:设定的交通信号灯信号、设定的交通标志信号、设定的交通标线信号、设定的交通管理者信号。
本申请利用仿真场景的初始化过程和场景仿真过程,以及场景触发信息和交通参与者的触发信息,降低了车型对场景交互结果的影响,采用丰富多样的触发信息可保证场景仿真的一致性的同时,提升仿真场景的多样性;
根据第二方面第1种至第9种中任一种可能的实施方式，在第10种可能的实施方式中，仿真场景是根据仿真场景库获得的，仿真场景库包括多个仿真场景，多个仿真场景包括根据道路测试数据获得的仿真场景。
本申请第三方面还提供一种计算机可读存储介质,该计算机可读存储介质存储有代码或指令,当代码或指令被运行时,执行第一方面第1种至第10种中任一项的方法。
本申请第四方面还提供一种车辆，在第四方面的第1种可能的实施方式中，该车辆可以获取仿真触发信息，根据仿真触发信息将规控算法由第一规控算法切换为第二规控算法，该第一规控算法用于使车辆状态达到预设条件，该第二规控算法用于仿真测试。
本申请提供的测试方法可以应用于交通参与者较多的复杂场景,可以考虑到高级自动驾驶遇到的多种类型参与者,多次交互的复杂情况。另外一方面,智能驾驶系统的迭代、更新以及验证必须保证同一场景可重复性,才能够评价、证明系统的能力。本申请提供的测试方法可以保证多次测试结果的一致性,降低测试成本。此外,通过位置转换模块将测试车辆投射到仿真场景中,还可以实现场地调整灵活方便、结果可重复性良好的整车在环测试。
附图说明
图1为本申请实施例提供的一种车辆功能示意图;
图2为本申请实施例提供的一种车辆架构示意图;
图3为本申请实施例提供的一种仿真系统架构示意图;
图4为本申请实施例提供的一种自车信息提取流程示意图;
图5为本申请实施例提供的一种位置转换原理示意图;
图6为本申请实施例提供的一种定位转换关系更新示意图;
图7为本申请实施例提供的一种场景测试流程示意图;
图8为本申请实施例提供的一种测试过程示意图;
图9为本申请实施例提供的一种测试初始化过程示意图;
图10为本申请实施例提供的一种路测提取场景多次测试速度曲线图;
图11为本申请实施例提供的一种多次测试速度曲线图。
具体实施方式
下面结合附图,对本申请的实施例进行描述,显然,所描述的实施例仅是本申请一部分的实施例,而不是全部的实施例。本领域普通技术人员可知,随着技术的发展和新场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
汽车正在电气化、网联化、智能化、共享化等的大潮中不断发展革新。图1为本申请实施例提供的车辆100的一个功能示意图。车辆100可包括各种子系统，例如信息娱乐系统110、感知系统120、决策控制系统130、驱动系统140以及计算平台150。可选地，车辆100可包括更多或更少的子系统，并且每个子系统都可包括多个部件。另外，车辆100的每个子系统和部件可以通过有线或者无线的方式实现互连。
在一些实施例中,信息娱乐系统110可以包括通信系统111,娱乐系统112以及导航系统113。
通信系统111可以包括无线通信系统，无线通信系统可以直接地或者经由通信网络来与一个或多个设备进行无线通信。例如，无线通信系统可使用第三代(3rd generation,3G)蜂窝通信技术，例如码分多址(code division multiple access,CDMA)；或者第四代(4th generation,4G)蜂窝通信技术，例如长期演进(long term evolution,LTE)通信技术；或者第五代(5th generation,5G)蜂窝通信技术，例如新无线(new radio,NR)通信技术。无线通信系统可利用WiFi与无线局域网(wireless local area network,WLAN)通信。在一些实施例中，无线通信系统可利用红外链路、蓝牙或紫蜂(ZigBee)与设备直接通信。无线通信系统还可使用其他无线协议，例如各种车辆通信系统：无线通信系统可包括一个或多个专用短程通信(dedicated short range communications,DSRC)设备，这些设备可实现车辆和/或路边台站之间的公共和/或私有数据通信。
娱乐系统112可以包括中控屏,麦克风和音响,用户可以基于娱乐系统在车内收听广播,播放音乐;或者将手机和车辆联通,在中控屏上实现手机的投屏,中控屏可以为触控式,用户可以通过触摸屏幕进行操作。在一些情况下,可以通过麦克风获取用户的语音信号,并依据对用户的语音信号的分析实现用户对车辆100的某些控制,例如调节车内温度等。在另一些情况下,可以通过音响向用户播放音乐。
导航系统113可以包括由地图供应商所提供的地图服务,从而为车辆100提供行驶路线的导航,导航系统113可以和车辆的全球定位系统121、惯性测量单元122配合使用。地图供应商所提供的地图服务可以为二维地图,也可以是高精地图。
感知系统120可包括感测关于车辆100周边的环境的信息的若干种传感器。例如，感知系统120可包括定位系统121(定位系统可以是全球定位系统(global positioning system,GPS)，也可以是北斗系统或者其他定位系统)、惯性测量单元(inertial measurement unit,IMU)122、激光雷达123、毫米波雷达124、超声雷达125以及摄像装置126。感知系统120还可包括被监视车辆100的内部系统的传感器(例如，车内空气质量监测器、燃油量表、机油温度表等)。来自这些传感器中的一个或多个的传感器数据可用于检测对象及其相应特性(位置、形状、方向、速度等)。这种检测和识别是车辆100的安全操作的关键功能。
定位系统121可用于估计车辆100的地理位置。惯性测量单元122用于基于惯性加速度来感测车辆100的位置和朝向变化。在一些实施例中，惯性测量单元122可以是加速度计和陀螺仪的组合。激光雷达123可利用激光来感测车辆100所位于的环境中的物体。在一些实施例中，激光雷达123可包括一个或多个激光源、激光扫描器以及一个或多个检测器，以及其他系统组件。毫米波雷达124可利用无线电信号来感测车辆100的周边环境内的物体。在一些实施例中，除了感测物体以外，毫米波雷达124还可用于感测物体的速度和/或前进方向。超声雷达125可以利用超声波信号来感测车辆100周围的物体。摄像装置126可用于捕捉车辆100的周边环境的图像信息。摄像装置126可以包括单目相机、双目相机、结构光相机以及全景相机等，摄像装置126获取的图像信息可以包括静态图像，也可以包括视频流信息。
决策控制系统130包括基于感知系统120所获取的信息进行分析决策的计算系统131,决策控制系统130还包括对车辆100的动力系统进行控制的整车控制器132,以及用于控制车辆100的转向系统133、加速踏板134(包括电动车的加速踏板或者燃油车的油门,这里是一个示例性的称呼)和制动系统135。
计算系统131可以操作来处理和分析由感知系统120所获取的各种信息以便识别车辆100周边环境中的目标、物体和/或特征。所述目标可以包括行人或者动物,所述物体和/或特征可包括交通信号、道路边界和障碍物。计算系统131可使用物体识别算法、运动中恢复结构(structure from motion,SFM)算法、视频跟踪等技术。在一些实施例中,计算系统131可以用于为环境绘制地图、跟踪物体、估计物体的速度等等。计算系统131可以将所获取的各种信息进行分析并得出对车辆的控制策略。
整车控制器132可以用于对车辆的动力电池和驱动器141进行协调控制,以提升车辆100的动力性能。
转向系统133可操作来调整车辆100的前进方向。例如在一个实施例中可以为方向盘系统。加速踏板134用于控制驱动器141的操作速度并进而控制车辆100的速度。
制动系统135用于控制车辆100减速。制动系统135可使用摩擦力来减慢车轮144。在一些实施例中,制动系统135可将车轮144的动能转换为电流。制动系统135也可采取其他形式来减慢车轮144转速从而控制车辆100的速度。
驱动系统140可包括为车辆100提供动力运动的组件。在一个实施例中,驱动系统140可包括驱动器141、能量源142、传动系统143和车轮144。驱动器141可以是内燃机、电动机、空气压缩引擎或其他类型的引擎组合,例如汽油发动机和电动机组成的混动引擎,内燃引擎和空气压缩引擎组成的混动引擎。驱动器141将能量源142转换成机械能量。
能量源142的示例包括汽油、柴油、其他基于石油的燃料、丙烷、其他基于压缩气体的燃料、乙醇、太阳能电池板、电池和其他电力来源。能量源142也可以为车辆100的其他系统提供能量。
传动装置143可以将来自驱动器141的机械动力传送到车轮144。传动装置143可包括变速箱、差速器和驱动轴。在一个实施例中，传动装置143还可以包括其他器件，比如离合器。其中，驱动轴可包括可耦合到一个或多个车轮144的一个或多个轴。
车辆100的部分或所有功能受计算平台150控制。计算平台150可包括至少一个处理器151,处理器151可以执行存储在例如存储器152这样的非暂态计算机可读介质中的指令153。在一些实施例中,计算平台150还可以是采用分布式方式控制车辆100的个体组件或子系统的多个计算设备。
处理器151可以是任何常规的处理器，如中央处理单元(central processing unit,CPU)。替选地，处理器151还可以包括诸如图形处理器(graphics processing unit,GPU)、现场可编程门阵列(field programmable gate array,FPGA)、片上系统(system on chip,SoC)、专用集成电路(application specific integrated circuit,ASIC)或它们的组合。尽管图1功能性地图示了处理器、存储器、和在相同块中的计算机110的其它元件，但是本领域的普通技术人员应该理解该处理器、计算机、或存储器实际上可以包括可以或者可以不存储在相同的物理外壳内的多个处理器、计算机、或存储器。例如，存储器可以是硬盘驱动器或位于不同于计算机110的外壳内的其它存储介质。因此，对处理器或计算机的引用将被理解为包括对可以或者可以不并行操作的处理器或计算机或存储器的集合的引用。不同于使用单一的处理器来执行此处所描述的步骤，诸如转向组件和减速组件的一些组件每个都可以具有其自己的处理器，所述处理器只执行与特定于组件的功能相关的计算。
在此处所描述的各个方面中,处理器可以位于远离该车辆并且与该车辆进行无线通信。在其它方面中,此处所描述的过程中的一些在布置于车辆内的处理器上执行而其它则由远程处理器执行,包括采取执行单一操纵的必要步骤。
在一些实施例中，存储器152可包含指令153(例如，程序逻辑)，指令153可被处理器151执行来执行车辆100的各种功能。存储器152也可包含额外的指令，包括向信息娱乐系统110、感知系统120、决策控制系统130、驱动系统140中的一个或多个发送数据、从其接收数据、与其交互和/或对其进行控制的指令。
除了指令153以外,存储器152还可存储数据,例如道路地图、路线信息,车辆的位置、方向、速度以及其它这样的车辆数据,以及其他信息。这种信息可在车辆100在自主、半自主和/或手动模式中操作期间被车辆100和计算平台150使用。
计算平台150可基于从各种子系统(例如,驱动系统140、感知系统120和决策控制系统130)接收的输入来控制车辆100的功能。例如,计算平台150可利用来自决策控制系统130的输入以便控制转向系统133来避免由感知系统120检测到的障碍物。在一些实施例中,计算平台150可操作来对车辆100及其子系统的许多方面提供控制。
可选地,上述这些组件中的一个或多个可与车辆100分开安装或关联。例如,存储器152可以部分或完全地与车辆100分开存在。上述组件可以按有线和/或无线方式来通信地耦合在一起。
可选地,上述组件只是一个示例,实际应用中,上述各个模块中的组件有可能根据实际需要增添或者删除,图1不应理解为对本申请实施例的限制。
可选地,可以将车辆100配置为完全或部分自动驾驶模式。例如:车辆100可以通过感知系统120获取其周围的环境信息,并基于对周边环境信息的分析得到自动驾驶策略以实现完全自动驾驶,或者将分析结果呈现给用户以实现部分自动驾驶。
在道路行进的自动驾驶汽车,如上面的车辆100,可以识别其周围环境内的物体以确定对当前速度的调整。所述物体可以是其它车辆、交通控制设备、或者其它类型的物体。在一些示例中,可以独立地考虑每个识别的物体,并且基于物体的各自的特性,诸如它的当前速度、加速度、与车辆的间距等,可以用来确定自动驾驶汽车所要调整的速度。
可选地,车辆100或者与车辆100相关联的感知和计算设备(例如计算系统131、计算平台150)可以基于所识别的物体的特性和周围环境的状态(例如,交通、雨、道路上的冰、等等)来预测所述识别的物体的行为。可选地,每一个所识别的物体都依赖于彼此的行为,因此还可以将所识别的所有物体全部一起考虑来预测单个识别的物体的行为。车辆100能够基于预测的所述识别的物体的行为来调整它的速度。换句话说,自动驾驶汽车能够基于所预测的物体的行为来确定车辆将需要调整到哪种状态(例如,加速、减速、或者停止)。在这个过程中,也可以考虑其它因素来确定车辆100的速度,诸如,车辆100在行驶的道路中的横向位置、道路的曲率、静态和动态物体的接近度等等。
除了提供调整自动驾驶汽车的速度的指令之外,计算设备还可以提供修改车辆100的转向角的指令,以使得自动驾驶汽车遵循给定的轨迹和/或维持与自动驾驶汽车附近的物体(例如,道路上的相邻车道中的轿车)的安全横向和纵向距离。
上述车辆100可以为轿车、卡车、摩托车、公共汽车、船、飞机、直升飞机、割草机、娱乐车、游乐场车辆、施工设备、电车、高尔夫球车、火车等,本申请实施例不做特别的限定。
本申请实施例可以应用于如图2所示的架构中。如图2所示，车辆100包括多个车辆集成单元(vehicle integration unit,VIU)11，通信盒子(telematics box,T-BOX)12，座舱域控制器(cockpit domain controller,CDC)13，移动数据中心(mobile data center,MDC)14，整车域控制器(vehicle domain controller,VDC)15。车辆100还包括设置在车身上的多种类型的传感器，包括：激光雷达21，毫米波雷达22，超声雷达23，摄像装置24。每种类型的传感器可以包括多个。应当理解的是，虽然图2中示出了不同的传感器在车辆100上的位置布局，但是图2中的传感器数量和位置布局仅为一种示意，本领域人员可以依据需要合理地选择传感器的种类、数量和位置布局。
在图2中示出了四个VIU,应当理解的是,图2中的VIU的数量和位置仅为一种示例,本领域技术人员可以依据实际需求选择合适的VIU的数量和位置。车辆集成单元VIU 11为多个车辆零部件提供车辆零部件所需的部分或全部的数据处理功能或控制功能。VIU可以具有以下多种功能中的一种或多种。
1、电子控制功能,即VIU用于实现部分或全部车辆零部件内部的电子控制单元(electronic control unit,ECU)提供的电子控制功能。例如,某一车辆零部件所需的控制功能,又例如,某一车辆零部件所需的数据处理功能。
2、与网关相同的功能,即VIU还可以具有部分或全部与网关相同的功能,例如,协议转换功能、协议封装并转发功能以及数据格式转换功能。
3、跨车辆零部件的数据的处理功能,即对从多个车辆零部件的执行器获取的数据进行处理、计算等。
需要说明的是,上述功能中涉及的数据,可以包括车辆零部件中执行器的运行数据,例如,执行器的运动参数,执行器的工作状态等。上述功能中涉及的数据还可以是通过车辆零部件的数据采集单元(例如,敏感元件)采集的数据,例如,通过车辆的敏感元件采集的车辆所行驶的道路的道路信息,或者天气信息等,本申请实施例对此不做具体限定。
在图2的车辆100示例中,车辆100可以分为多个域(domain),每个域都有独立的域控制器(domain controller),具体地,在图2中,示出了两种域控制器:座舱域控制器CDC 13和整车域控制器VDC 15。
座舱域控制器CDC 13可用于实现车辆100座舱区域的功能控制,在座舱区域的车辆部件可以包括抬头显示装置(head up display,HUD)、仪表盘、收音机、中控屏幕、导航、摄像头等。
整车域控制器VDC 15可用于对车辆的动力电池和驱动器141进行协调控制,以提升车辆100的动力性能,在于一些实施例中,图1中的整车控制器132可以实现VDC的各种功能。
图2中还示出了车联网设备T-BOX 12和移动数据中心MDC 14。T-BOX 12可用于实现车辆100和车辆内部以及外部设备的通信连接。T-BOX可以通过车辆100的总线获取车内设备数据，也可以通过无线网络和用户的手机通信连接，在一些实施例中，T-BOX 12可以被包括在图1的通信系统111中。移动数据中心MDC 14用于基于环境感知定位、智能规划决策和车辆运动控制等核心控制算法，输出驱动、传动、转向和制动等执行控制指令，实现车辆100的自动控制，还能够通过人机交互界面，实现车辆驾驶信息的人机交互。在一些实施例中，图1中的计算平台150可以实现MDC 14的各种功能。
在图2中的四个VIU 11形成环形拓扑连接网络,每个VIU 11与其近邻位置的传感器通信连接,T-BOX 12、CDC 13、MDC 14以及VDC 15与VIU的环形拓扑连接网络通信连接。VIU 11可以从各传感器获取信息,并将获取的信息上报给CDC 13、MDC 14以及VDC 15。 借由环形拓扑网络,T-BOX 12、CDC 13、MDC 14以及VDC 15之间也可以实现相互的通信。
应当理解的是，上述环形拓扑连接网络仅是一种示意，本领域技术人员可以依据需求选择其它合适的VIU连接方式。
VIU之间的连接可以采用例如以太网(ethernet),VIU和T-BOX 12、CDC 13、MDC 14以及VDC 15的连接可以采用例如以太网或快捷外围部件互连(peripheral component interconnect express,PCIe)技术,VIU和传感器之间的连接可以采用例如控制器局域网络(controller area network,CAN),局域互联网络(local interconnect network,LIN),FlexRay,面向媒体的系统传输(media oriented system transport,MOST)等。
结合上述描述,本申请实施例提供了一种仿真测试系统,该系统基于场景触发机制,具有高可靠性,并且能够实现车辆在环测试,可应用于图1或图2中所示的车辆100的仿真测试中。
一种可能的实施方式,本申请实施例提供的仿真测试系统可以包括智能驾驶中央处理器、车载定位与感知设备单元,数据平台与仿真平台。
其中,一种可能的实施方式,智能驾驶中央处理器可以是自动驾驶系统,可以是如图1所示的计算平台,也可以是如图2所示的MDC平台,或者是其他负责自动驾驶或者智能驾驶辅助计算的域控制器。本申请对此不做限制。
一种可能的实施方式,本申请实施例提供的仿真测试系统中的定位与感知设备单元可以是如图1所示的感知系统120中的传感器或装置。
一种可能的实施方式,数据平台和仿真平台可以是测试场地提供的服务器,也可以是地理位置与测试场地解耦的云端服务器,该云端服务器可以是实际的服务器,也可以是虚拟的服务器,本申请对此不做限制。
本申请实施例主要从以下几个方面对本申请提供的一种仿真测试系统进行介绍:
首先,对本申请提供方案的系统架构图进行介绍。
图3为本申请实施例提供的仿真系统架构示意图。如图3所示，仿真系统架构可以包括测试场景库、仿真系统、自动驾驶系统(automated driving system,ADS)、自动驾驶车辆等几个方面。需要说明的是，本申请实施例提供的测试方法可以应用于自动驾驶系统的测试，在此之外，本申请实施例提供的测试方法还可以对高级驾驶辅助系统(advanced driving assistance system,ADAS)等其他系统进行测试。本申请不对待测系统的类型进行限定。同时，还需要说明的是，本申请说明书中所描述的场景可以理解为车辆以及车辆周围的环境、道路、天气、交通信号、交通参与者等多种信息与约束条件的集合。例如，场景可以是根据道路类型划分的高速公路场景、城市道路场景、越野道路场景、沙漠道路场景等；又如，场景可以是根据功能或服务划分的加油站场景、购物中心场景、停车库场景等；此外，场景还可以是根据驾驶任务或车辆功能应用划分的自动驾驶场景、自动泊车场景、紧急制动场景、防抱死场景、防侧滑场景等。本申请说明书中所述的仿真场景可以为通过计算机呈现的场景，例如可以通过计算机模拟真实世界的场景，或者通过在真实世界中采集场景数据并通过仿真工具再现真实世界中的场景。本申请对此不作限制。
如图3所示,测试场景库中包括多种场景。其中,测试场景库中的场景可以为通过道路测试获得的场景,也可以是通过其他途径获得的场景。例如,可以通过实车路测采集数据,并提取路测数据以形成道路测试的场景,并将通过路测获得的场景加入测试场景库。此外,测试场景库也可以包括其他公开可获得的场景,例如可以是共享平台提供的场景,也可以是通过其他商业可获得途径得到的场景。
需要说明的是,测试场景库可以保存在本地,例如仿真测试系统、工作站等。同时,测试场景库还可以保存在云端服务器,场景库可以实时更新,且可以供多个仿真测试平台共享。
如图3所示,一种可能的实施方式,本申请所描述的仿真场景信息包括场景控制信息、场景触发信息以及交通参与者信息。
其中,场景控制信息可以来自场景控制文件。场景控制文件可以包括某个待仿真场景的数据,当场景控制信息被加载时,可以生成仿真场景中的道路、交通设施、障碍物、交通参与者等。场景控制信息可以理解为用于计算机或仿真器生成仿真场景所需要的信息,或场景中的环境、道路、交通设施、障碍物等信息。
场景触发信息可以用于切换规划控制算法。规划控制算法在本申请说明书中也称为规控算法,规控算法用于车辆的规划控制,使车辆能够实现沿着规划控制的路径行驶,或根据规划控制的结果执行加速、制动等动作。本申请不对具体的驾驶任务或者车辆执行的动作做限制。当特定条件被触发时,或满足场景触发信息所指示的条件时,可以通过切换控制算法以进行仿真测试。
一种可能的实施方式,本申请实施例提供的场景触发信息可以包括自车触发信息、交通参与者触发信息、交通信号触发信息等。
自车触发信息(或车辆触发信息)可以为自车满足特定条件时的状态信息。需要说明的是,本申请说明书中所描述的车辆有时也描述为自车,在不作特别说明的情况下,两者可以理解为具有相同含义。
一种可能的实施方式，自车触发信息可以是自动驾驶车辆的位置、运动状态、驾驶任务等一个或多个方面满足特定条件的信息。其中，运动状态可以包括位置、速度、加速度等。例如，自车触发信息可以是车辆的位置为到达A点；又如，自车触发信息可以是车辆的速度到达5m/s；又如，自车触发信息可以是车辆的加速度到达2m/s²；又如，触发信息可以是车辆到达A点且速度到达5m/s；又如，触发信息可以是车辆到达A点且加速度到达2m/s²；又如，触发信息可以是车辆到达A点、速度到达5m/s且加速度到达2m/s²。自车触发信息可以用于自车路径规划、以及用于仿真场景开始时刻自动驾驶车辆运动状态的控制等。
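The composite ego-vehicle trigger conditions described above (position, speed, acceleration, or any combination of them) can be sketched as a predicate builder. This is an illustrative sketch only; the function name, thresholds, and tolerances are assumptions, not part of the application:

```python
import math

def make_ego_trigger(target_point=None, target_speed=None, target_accel=None,
                     pos_tol=0.5, speed_tol=0.1, accel_tol=0.1):
    """Build a predicate that is True when every configured condition holds.

    target_point: (x, y) the vehicle must reach, within pos_tol metres.
    target_speed: speed in m/s the vehicle must reach, within speed_tol.
    target_accel: acceleration in m/s^2, within accel_tol.
    Unset conditions are ignored, so a trigger may combine any subset,
    e.g. "reach point A AND reach 5 m/s AND reach 2 m/s^2".
    """
    def triggered(x, y, speed, accel):
        if target_point is not None and \
                math.hypot(x - target_point[0], y - target_point[1]) > pos_tol:
            return False
        if target_speed is not None and abs(speed - target_speed) > speed_tol:
            return False
        if target_accel is not None and abs(accel - target_accel) > accel_tol:
            return False
        return True
    return triggered

# Example: fires only when the vehicle is at point A with speed 5 m/s
# and acceleration 2 m/s^2, matching the combined condition in the text.
at_a_trigger = make_ego_trigger(target_point=(100.0, 50.0),
                                target_speed=5.0, target_accel=2.0)
```

The same builder pattern would extend to subsystem-state or driving-task conditions by adding further keyword arguments.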
另一种可能的实施方式,自车触发信息还可以是自车的子系统或零部件的状态,例如可以是自车对外发送的声光信号状态,包括左转信号灯指示、右转信号指示、刹车信号指示、紧急或故障信号指示、鸣笛信号、让行信号、自动驾驶状态信号、作业状态信号、其他指示信号等;又如可以是车门车窗状态、空调状态、电机状态、电池状态、热管理系统状态等。或者,还可以是车辆悬架状态、制动系统状态、转向系统状态等。自车触发信息可以是自车及子系统、零部件等的状态满足特定条件的信息,本申请不作限制。
另一种可能的实施方式,自车触发信息还可以是驾驶任务信息,例如可以是自动驾驶任务,例如也可以是自动紧急制动任务或其他任务,本申请对此不做限制。
交通参与者触发信息为交通参与者的状态满足特定条件的信息。交通参与者可以包括其他机动车、非机动车、行人、动物等,本申请实施例对交通参与者不作限制。交通参与者可能会对自车的规划、决策、控制等产生影响。交通参与者触发信息可以用于控制交通参与者注入仿真场景的时机。当满足触发条件时,可以向仿真场景注入特定的交通参与者。交通参与者触发信息可以用于场景的更新。当满足触发条件时,更新仿真场景的信息。
一种可能的实施方式，交通参与者信息可以是其他机动车、非机动车、行人、动物等的位置、速度、加速度以及这些变量的组合，具体可参考自车触发信息的描述，本处不再赘述。另一种可能的实施方式，交通参与者触发信息可以是其他机动车的子系统或零部件的状态满足特定条件的信息，也可以是其他非机动车的子系统或零部件的状态满足特定条件的信息，例如可以是其他机动车或非机动车对外发送的声光信号状态，包括左转信号灯指示、右转信号指示、刹车信号指示、紧急或故障信号指示、鸣笛信号、让行信号、自动驾驶状态信号、作业状态信号、其他指示信号等；又如可以是车门车窗状态、空调状态、电机状态、电池状态、热管理系统状态等。交通参与者触发信息还可以是行人的动作、手势、声音、表情等。
交通信号触发信息可以为交通信号变化为某一个状态,或者出现新的交通信号。交通信号可以包括交通信号灯的信号、交通指挥或交通管理者的信号、交通标志信号、交通标线信号等。其中,交通信号灯的信号可以包括绿灯、黄灯、红灯、箭头灯、叉形灯、警示灯、机动车信号灯、非机动车信号灯等。交通标志信号可以包括限速指示信号、解除限速指示信号、道路指引信号、车道变化指示信号、警告标志、禁令标志、指示标志、距离标志、道路施工安全标志等。交通标线信号可以包括指示标线、禁止标线、警告标线等。交通管理者的信号可以包括停止、直行、左转弯、右转弯、等待、变道、减速慢行、靠边停车等。
一种可能的实施方式,交通信号触发信息可以是交通信号灯的状态变为指示通行,或者,交通信号灯的状态变为指示禁止通行。另一种可能的实施方式,交通信号触发信息可以是出现了限速60km/h指示牌,或者是出现了解除限速60km/h指示牌。另一种可能的实施方式,交通信号触发信息可以是出现了用于指示合流的车道线,或者是潮汐车道指示牌将第一车道由直行车道切换为左转车道。另一种可能的实施方式,交通信号触发信息可以为交通管理者指示停车,或者为交通管理者指示前行至第一地点。交通信号触发信息可以用于控制仿真场景中的交通信号。
需要说明的是,本处提供的场景触发信息仅作示例,本申请对此不做限制。应理解,本领域技术人员不经创造性劳动可想到的其他实施方式均属于本申请技术方案所涵盖的范围。
如图3所示,本申请所使用的仿真系统在加载仿真场景后,定位转换模块利用场景控制信息和测试车辆当前的定位信息,获取自车位置在仿真场景和真实场景之间的转换关系,并将自车映射到仿真场景中。可以理解的是,当自车在实际测试场地中运动时,仿真场景中自车所映射的仿真车辆的位置也会根据自车在实际场地的位置和位置转换关系而变化。仿真系统利用场景触发信息、交通参与者信息和自车定位投射后的位置信息,更新仿真场景。并且,仿真系统将仿真场景信息实时发送给车辆,例如可以发送给车辆的自动驾驶系统或者驾驶辅助系统。通过上述过程,实现仿真场景的更新与车辆在环测试。
本处需要指出的是,仿真系统与自动驾驶系统之间的信息传输可以通过有线或无线的方式完成。例如可以通过如图1中所示的通信系统可以使用的无线通信方式完成,本处不再赘述。仿真场景信息可以通过无线的方式传输到车辆。或者,另一种可能的实施方式,仿真系统可以安装在车辆上,并且与车辆采用有线连接。仿真系统通过有线连接将仿真场景信息发送给车辆。可选地,仿真系统还可以和云端通信,并通过云端存储的仿真场景库刷新仿真系统中的仿真场景库。
需要说明的是,在本申请说明书中,车辆有时也描述为自车,在没有特殊说明的情况下,两者可以理解为相同的含义。
图4为本申请实施例提供的位置转换模块的原理与转换关系更新流程图。如图4所示，在进行车辆在环测试过程中，仿真系统加载仿真场景后，由仿真系统确定是否更新位置转换关系。如需更新，仿真系统根据仿真场景信息和当前时刻定位信息获得新的位置转换关系。一种可能的实施方式，具体地，仿真系统可以根据场景控制信息和车辆的定位信息获得新的位置转换关系。在获得新的位置转换关系后，仿真系统根据车辆的位置信息和新的位置转换关系获得仿真场景中仿真车辆的位置信息。
一种可能的实施方式,车辆的位置信息可以包括车辆的位置和车辆在该位置下的姿态,例如可以是车辆的朝向,例如朝东,朝西,或朝东偏南5度,或朝东偏北3度。
一种可能的实施方式，车辆在实际测试场地的位置变化和运动状态与车辆投射在仿真场景中的位置和运动关系完全对应。例如，车辆在实际测试场地中从第一位置运动到第二位置，车辆在仿真场景中投影的位置从第三位置变化为第四位置；并且，车辆在实际测试场地中从第一位置运动到第二位置过程中的运动状态的变化过程与车辆在仿真场景中从第三位置运动到第四位置过程中的运动状态的变化过程完全一致。并且，在实际测试场地中第一位置相对于第二位置的相对位置关系，与在仿真场景中第三位置与第四位置的相对位置关系完全一致。
图5为位置转换原理示意图。如图5所示,在另一种可能的实施方式中,车辆在实际测试场地的位置变化和运动状态与车辆投射在仿真场景中的位置和运动关系可以不完全对应。例如,车辆在仿真场景中投射的位置变化可以与车辆在实际测试场地的位置变化不同。如图5所示,车辆在测试场地中由实线框所示的第一位置移动到虚线所示的第二位置;车辆在仿真场景中由实线框所示的第三位置移动到虚线框所示的第四位置。如图5所示,车辆在实际测试场地中由第一位置移动到第二位置的向量如实线箭头所示;车辆在仿真场景中由第三位置移动到第四位置的向量如虚线箭头所示。一种可能的实施方式,实线箭头所示的向量与虚线箭头所示的向量可以不一致,车辆在实际测试场地的位置变化可以与在仿真场景中的位置变化不一致。如图5所示,车辆在仿真场景中由第三位置变化到第四位置过程中的位置变化过程也可以与车辆在测试场地中由第一位置变化到第二位置过程中的位置变化过程不同。
图6为在场地测试中多次仿真场景测试过程中位置转换关系的更新原理示意图。仿真系统包括定位转换模块,该定位转换模块用于根据车辆的位置信息和仿真场景信息获得位置转换关系,一种可能的实施方式,仿真场景信息包括场景控制信息,定位转换模块可以用于根据车辆的位置信息和场景控制信息获得位置转换关系。例如,在实线框所示的位置,仿真系统根据车辆定位信息和场景控制信息获得第一位置转换关系;在虚线框所示的位置,仿真系统根据车辆新的定位信息和场景控制信息获得第二位置转换关系。第一位置转换关系和第二位置转换关系可以不同。因此,在同一个测试中,车辆在测试场地中位置以及车辆在仿真场景中位置之间的位置转换关系可以相同也可以不同;在多次不同的测试中,车辆在测试场地中位置以及车辆在仿真场景中位置之间的位置转换关系可以相同也可以不同。此外,本申请实施例提供的测试方法中,位置转换关系可以根据测试场地的实际情况变化,例如当测试场地面积较小时,通过调整位置转换关系,可以实现车辆在面积较小的测试场地中完成在仿真场景中较大场地面积的测试。本申请实施例提供的位置转换模块可实现不受场地约束的车辆在环测试,为测试带来更高的灵活性。
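The position conversion relationship described above can be sketched as a rigid 2-D transform (a rotation plus a translation) derived from the vehicle's real-world pose and its desired pose in the simulation scene; recomputing it at the start of each test is what frees the test from any particular site. The function names and the pose convention (x, y, heading in radians) are illustrative assumptions:

```python
import math

def compute_conversion(real_pose, scene_pose):
    """Derive a rigid 2-D transform (rotation + translation) that maps the
    vehicle's current real-world pose onto its desired pose in the scene.

    Poses are (x, y, heading_rad). The returned (d_theta, tx, ty) satisfies
    scene_xy = R(d_theta) @ real_xy + (tx, ty).
    """
    rx, ry, rh = real_pose
    sx, sy, sh = scene_pose
    d_theta = sh - rh
    cos_t, sin_t = math.cos(d_theta), math.sin(d_theta)
    tx = sx - (cos_t * rx - sin_t * ry)
    ty = sy - (sin_t * rx + cos_t * ry)
    return d_theta, tx, ty

def to_scene(conversion, x, y, heading):
    """Apply a previously computed conversion to a real-world pose."""
    d_theta, tx, ty = conversion
    cos_t, sin_t = math.cos(d_theta), math.sin(d_theta)
    return (cos_t * x - sin_t * y + tx,
            sin_t * x + cos_t * y + ty,
            heading + d_theta)
```

Because the transform is rigid, relative distances between successive vehicle positions are preserved in the scene; the non-corresponding mapping that Figure 5 allows would instead use a different, possibly scaled, `to_scene`.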
对于被测仿真场景,控制场景内交通参与者按照预定轨迹运动;利用场景控制信息和测试车辆的定位信息建立测试车辆与仿真系统内虚拟自动驾驶车辆的位置关系,使测试不受场景限制;根据场景触发信息设计场景的初始化过程和仿真过程,在场景初始化阶段和场景仿真阶段采用不同的规控算法,剔除人为干扰,保证多次测试结果的可重复性。
图7为本申请实施例提供的一种仿真测试流程示意图。如图7所示，在加载仿真场景阶段，仿真系统加载场景控制信息后获得仿真场景，仿真系统利用自车定位信息和场景触发信息，更新仿真场景内交通参与者状态信息(位置、速度、朝向等)，并将仿真场景信息发送给车辆，一种可能的实施方式，可以将仿真场景信息发送给自动驾驶系统；所发送的场景信息可以被自动驾驶系统识别，以使自动驾驶系统认为车辆在仿真场景所对应的真实世界中的场景行驶。一种可能的实施方式，车辆的自动驾驶系统根据场景触发状态选择不同的规划控制算法，并将控制指令发送给车辆。场景触发状态可以包括场景被触发、场景未被触发。车辆不断将定位与运动信息发送给自动驾驶系统，用于自动驾驶或驾驶辅助功能的计算；并且，车辆将自车定位信息发送给仿真系统，仿真系统根据自车定位信息更新场景，并检测是否达到场景触发条件。
本申请所采用的场景信息包含场景内自车相关信息、场景触发信息,交通参与者信息。自车相关信息用于更新定位转换关系,场景触发信息用于切换规划控制算法,交通参与者信息用于场景更新。需要说明的是,本申请中的规划控制算法也称为规控算法,两者含义相同。
如图7所示,规控算法A用于场景测试阶段,对自动驾驶系统进行测试;规控算法B用于场景初始化阶段,使自车达到场景触发目标状态。具体地,如图7所示,当未触发场景时,或者说,当未达到场景触发条件时,自动驾驶系统采用规划控制算法B以实现初始化过程。一种可能的实施方式,采用规划控制算法B可以使车辆达到场景触发条件要求的运动状态或者定位。如图7所示,当车辆达到场景触发条件时,自动驾驶系统采用规控算法A以进入场景仿真测试过程。一种可能的实施方式,规划控制算法A可以为待仿真测试的算法。通过循环上述仿真过程可以完成场景的仿真。
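The switching flow of Figure 7 — planning control algorithm B during initialization until the scene trigger fires, then algorithm A for the test — can be sketched as a small one-way state machine. All names are illustrative assumptions, not the application's actual interfaces:

```python
class PhaseSwitcher:
    """Minimal sketch of the two-phase control switch: run the
    initialization controller (algorithm B) until the scene trigger
    fires, then hand control to the algorithm under test (algorithm A)."""

    def __init__(self, algo_a, algo_b, trigger):
        self.algo_a = algo_a    # planning/control algorithm under test
        self.algo_b = algo_b    # initialization controller
        self.trigger = trigger  # predicate on the vehicle state
        self.phase = "init"

    def step(self, state):
        # Once triggered, the switch is one-way: the test phase keeps
        # algorithm A, so repeated runs all start the scene simulation
        # from an identical, trigger-defined state.
        if self.phase == "init" and self.trigger(state):
            self.phase = "test"
        algo = self.algo_b if self.phase == "init" else self.algo_a
        return algo(state)
```

Keeping the switch one-way is what removes the human factor: the hand-over point is defined entirely by the trigger condition, not by an operator.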
本申请中提到的场景触发信息可以是自动驾驶车辆的位置、速度、加速度以及这些变量的组合。可参考上文所述,本处不再赘述。
图8为本申请实施例提供的一种仿真测试方法过程示意图。如图8所示,本申请提供的测试方法可以包括以下过程:加载仿真场景过程、初始化过程、场景仿真过程。在本申请中,过程有时也称为阶段。以下详细介绍。
首先,加载仿真场景,并更新位置转换关系。场景的加载可以由仿真系统根据场景控制信息获得仿真场景而实现。接着,进行初始化过程,采用规控算法B使车辆在仿真场景中达到触发条件。当车辆在仿真场景中达到触发条件时,进入到场景仿真过程。之后,在场景仿真过程中,自动驾驶系统执行规控算法A。一种可能的实施方式,当场景仿真过程包括多个过程时,可以使用不同的规控算法。场景仿真过程还可以包括第一测试目标、第二测试目标、第三测试目标等。每一个目标下的规控算法也可以不同,例如在第一测试目标下以规控算法C进行测试,在第二测试目标下以规控算法D进行测试。在第三测试目标下以规控算法E进行测试。以此类推,直至所有测试目标测试完成,仿真测试结束。
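The idea of successive test targets, each with its own planning control algorithm (C, D, E above), can be sketched as an ordered list of (algorithm, completion-condition) pairs; the function name and step budget are illustrative assumptions:

```python
def run_test_targets(targets, state, max_steps=1000):
    """Run each test target in order with its own control algorithm.

    targets: list of (algorithm, done) pairs, where `algorithm` maps the
    state to a new state and `done` reports whether the target is met.
    The simulation ends once every target has completed, mirroring the
    sequence of test targets described in the text.
    """
    for algorithm, done in targets:
        for _ in range(max_steps):
            if done(state):
                break
            state = algorithm(state)
        else:
            raise RuntimeError("target not reached within step budget")
    return state
```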
图9为本申请实施例提供的一种初始化过程示意图。
一种可能的实施方式，其横坐标表示时间，纵坐标表示速度。例如，场景触发条件可以为车辆在仿真场景中的速度满足一定条件。如图9所示，车辆由静止开始加速，速度由0按照一定曲线加速至如图中“init Tri Pt”所示的初始触发点，并按照一定曲线减速至满足场景触发条件的场景触发点，如图9中“scen Tri Pt”所示。需要说明的是，在初始化过程中，车辆采用规控算法B来使车辆达到触发条件。例如，当触发条件是车辆速度达到10m/s，则规控算法B可以为以恒定加速度使车辆加速的控制算法，例如可以为2m/s²的加速度使车辆加速，直至车辆达到10m/s的速度。规控算法B还可以为加速度变化的加速控制算法。本申请对规控算法B的具体实现方式不作限定。如图9所示，车辆在该规控算法下的加速度是变化的，开始时刻加速度大，随后加速度逐渐减小。
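As a sketch of the constant-acceleration variant of algorithm B mentioned above (accelerating at 2 m/s² until a 10 m/s trigger speed), a simple discrete integration shows the trigger being reached after about 5 s. The function name and time step are assumptions for illustration:

```python
def init_speed_profile(target_speed, accel, dt=0.1):
    """Constant-acceleration initialization (one possible algorithm B):
    integrate speed from rest until the trigger speed is reached.
    Returns (time_to_trigger_s, list_of_speeds)."""
    t, v, profile = 0.0, 0.0, [0.0]
    while v < target_speed - 1e-9:
        v = min(v + accel * dt, target_speed)  # clamp at the trigger speed
        t += dt
        profile.append(v)
    return t, profile

# With accel = 2 m/s^2 and target 10 m/s, the trigger is reached after 5 s;
# because the profile is fully determined by the trigger parameters, every
# run enters the scene simulation in the same state.
```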
图10和图11分别为自动紧急制动(automatic emergency braking,AEB)功能仿真测试的实际车辆采集的数据示意图,其中,图10横坐标表示位置,纵坐标表示速度;图11横坐标表示时间,纵坐标表示速度。
如图10所示,一种可能的实施方式,本申请实施例提供的仿真测试方法可以采用位置达到目标位置作为场景触发条件,例如图10中“trigger”所示的触发位置点。图10中横坐标的负半轴所示的曲线为初始化过程,在横坐标为0处达到场景触发条件,在横坐标的正半轴进行仿真测试。
另一种可能的实施方式,如图11所示,本申请实施例提供的仿真测试方法可以采用速度达到目标速度作为场景触发条件。图11中横坐标的负半轴所示的曲线为初始化过程,在横坐标为0处达到场景触发条件,在横坐标的正半轴进行仿真测试。
如图10和图11所示,多次仿真测试的曲线形状高度吻合,由此可以体现本申请实施例提供的测试方法所带来的良好效果,能够在多次测试的过程中保证每一次测试的一致性。由此,可以提供更准确的测试结果,方便对不同自动驾驶系统、不同车型、不同条件下的车辆进行仿真测试。
综上,本申请实施例提供的仿真测试系统在车辆在环测试时引入了定位转换模块,实现了方便灵活的测试,使仿真测试不受限于测试场地;同时,通过引入仿真场景的触发信息,能够在不同的阶段采用不同的规控算法,减少了车辆在环仿真测试中人为因素的影响,保证多次测试的可靠性和一致性。
以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以权利要求的保护范围为准。

Claims (21)

  1. 一种车辆测试方法,其特征在于,所述方法包括:
    获取第一位置信息,所述第一位置信息为所述车辆的位置信息;
    根据所述第一位置信息获取第二位置信息,所述第二位置信息为所述车辆在仿真场景中的位置信息。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述第一位置信息获取第二位置信息,具体包括:
    根据所述第一位置信息和仿真场景信息获取位置转换关系;
    根据所述第一位置信息和所述位置转换关系获取所述第二位置信息。
  3. 根据权利要求2所述的方法,其特征在于,在所述根据所述第一位置信息获取第二位置信息之后,所述方法还包括:
    根据所述第二位置信息更新所述仿真场景信息,所述仿真场景信息包括交通参与者信息;
    发送所述仿真场景信息。
  4. 根据权利要求2或3所述的方法,其特征在于,所述仿真场景信息还包括场景触发信息。
  5. 根据权利要求4所述的方法,其特征在于,所述场景触发信息用于切换测试阶段,所述测试阶段包括第一阶段和第二阶段;其中,所述车辆在所述第一阶段使用第一规划控制算法,所述车辆在所述第二阶段使用第二规划控制算法;
    所述方法还包括:根据所述场景触发信息由所述第一阶段切换为所述第二阶段。
  6. 根据权利要求4或5所述的方法,其特征在于,所述场景触发信息包括以下至少一种:车辆触发信息、交通参与者触发信息和交通信号触发信息。
  7. 根据权利要求6所述的方法,其特征在于,所述车辆触发信息包括设定的车辆运动状态,所述车辆运动状态包括以下至少一种:所述车辆的位置信息、所述车辆的运动信息、驾驶任务信息、自车子系统或零部件的状态信息。
  8. 根据权利要求6或7所述的方法,其特征在于,所述交通参与者触发信息包括以下至少一种:设定的运动状态、设定的位置、在仿真场景中出现的时间。
  9. 根据权利要求6至8任一项所述的方法,其特征在于,所述交通信号触发信息包括以下至少一种:设定的交通信号灯信号、设定的交通标志信号、设定的交通标线信号、设定的交通管理者信号。
  10. 根据权利要求1至9任一项所述的方法,其特征在于,所述仿真场景是根据仿真场景库获得的,所述仿真场景库包括多个仿真场景,所述多个仿真场景包括根据道路测试数据获得的仿真场景。
  11. 一种车辆测试系统,其特征在于,包括:
    获取模块,所述获取模块用于获取第一位置信息,所述第一位置信息为所述车辆的位置信息;
    位置转换模块,所述位置转换模块用于根据所述第一位置信息获取第二位置信息,所述第二位置信息为所述车辆在仿真场景中的位置信息。
  12. 根据权利要求11所述的系统,其特征在于,所述位置转换模块用于根据所述第一位置信息获取第二位置信息,具体包括:
    所述位置转换模块用于根据所述第一位置信息和仿真场景信息获取位置转换关系;
    所述位置转换模块用于根据所述第一位置信息和所述位置转换关系获取所述第二位置信息。
  13. 根据权利要求12所述的系统，其特征在于，还包括仿真模块和发送模块，在所述位置转换模块根据所述第一位置信息获取第二位置信息之后，所述仿真模块用于根据所述第二位置信息更新所述仿真场景信息，所述仿真场景信息包括交通参与者信息；
    所述发送模块用于发送所述仿真场景信息。
  14. 根据权利要求12或13所述的系统,其特征在于,所述仿真场景信息还包括场景触发信息。
  15. 根据权利要求14所述的系统,其特征在于,所述场景触发信息用于切换测试阶段,所述测试阶段包括第一阶段和第二阶段;其中,所述车辆在所述第一阶段使用第一规划控制算法,所述车辆在所述第二阶段使用第二规划控制算法;
    所述系统还包括:切换模块,所述切换模块用于根据所述场景触发信息由所述第一阶段切换为所述第二阶段。
  16. 根据权利要求14或15所述的系统,其特征在于,所述场景触发信息包括以下至少一种:车辆触发信息、交通参与者触发信息和交通信号触发信息。
  17. 根据权利要求16所述的系统，其特征在于，所述车辆触发信息包括设定的车辆运动状态，所述车辆运动状态包括以下至少一种：所述车辆的位置信息、所述车辆的运动信息、驾驶任务信息、自车子系统或零部件的状态信息。
  18. 根据权利要求16或17所述的系统,其特征在于,所述交通参与者触发信息包括以下至少一种:设定的运动状态、设定的位置、在仿真场景中出现的时间。
  19. 根据权利要求16至18任一项所述的系统,其特征在于,所述交通信号触发信息包括以下至少一种:设定的交通信号灯信号、设定的交通标志信号、设定的交通标线信号、设定的交通管理者信号。
  20. 根据权利要求11至19任一项所述的系统,其特征在于,所述仿真场景是根据仿真场景库获得的,所述仿真场景库包括多个仿真场景,所述多个仿真场景包括根据道路测试数据获得的仿真场景。
  21. 一种计算机可读存储介质,其特征在于,存储有代码或指令,当所述代码或指令被运行时,执行如权利要求1至10任一项所述的方法。
PCT/CN2021/115752 2021-08-31 2021-08-31 一种测试方法和系统 WO2023028858A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2021/115752 WO2023028858A1 (zh) 2021-08-31 2021-08-31 一种测试方法和系统
CN202180003497.0A CN113892088A (zh) 2021-08-31 2021-08-31 一种测试方法和系统
EP21955420.1A EP4379558A1 (en) 2021-08-31 2021-08-31 Test method and system
US18/590,616 US20240202401A1 (en) 2021-08-31 2024-02-28 Test method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/115752 WO2023028858A1 (zh) 2021-08-31 2021-08-31 一种测试方法和系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/590,616 Continuation US20240202401A1 (en) 2021-08-31 2024-02-28 Test method and system

Publications (1)

Publication Number Publication Date
WO2023028858A1 true WO2023028858A1 (zh) 2023-03-09

Family

ID=79016721

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115752 WO2023028858A1 (zh) 2021-08-31 2021-08-31 一种测试方法和系统

Country Status (4)

Country Link
US (1) US20240202401A1 (zh)
EP (1) EP4379558A1 (zh)
CN (1) CN113892088A (zh)
WO (1) WO2023028858A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116449806A (zh) * 2023-06-14 2023-07-18 中汽智联技术有限公司 基于安全层信息的车辆信息融合控制功能测试方法和系统
CN117094182A (zh) * 2023-10-19 2023-11-21 中汽研(天津)汽车工程研究院有限公司 V2v交通场景构建方法及v2x虚实融合测试系统

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114169083B (zh) * 2022-02-11 2022-05-20 深圳佑驾创新科技有限公司 一种自动紧急刹车系统数据分析方法、装置、设备、介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109557904A (zh) * 2018-12-06 2019-04-02 百度在线网络技术(北京)有限公司 一种测试方法、装置、设备和介质
CN111226268A (zh) * 2017-05-02 2020-06-02 密歇根大学董事会 用于自动驾驶车辆的模拟车辆交通
CN111309600A (zh) * 2020-01-21 2020-06-19 上汽通用汽车有限公司 虚拟场景注入自动驾驶测试方法及电子设备
CN112639793A (zh) * 2020-08-05 2021-04-09 华为技术有限公司 一种自动驾驶车辆的测试方法及装置
CN112671487A (zh) * 2019-10-14 2021-04-16 大唐移动通信设备有限公司 一种车辆测试的方法、服务器以及测试车辆
CN113077655A (zh) * 2021-03-18 2021-07-06 重庆车辆检测研究院有限公司 基于边缘计算的v2x场地在环测试方法和装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3121729B1 (en) * 2015-07-21 2018-09-26 Tata Elxsi Limited System and method for enhanced emulation of connected vehicle applications
CN112819968B (zh) * 2021-01-22 2024-04-02 北京智能车联产业创新中心有限公司 基于混合现实的自动驾驶车辆的测试方法和装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111226268A (zh) * 2017-05-02 2020-06-02 密歇根大学董事会 用于自动驾驶车辆的模拟车辆交通
CN109557904A (zh) * 2018-12-06 2019-04-02 百度在线网络技术(北京)有限公司 一种测试方法、装置、设备和介质
CN112671487A (zh) * 2019-10-14 2021-04-16 大唐移动通信设备有限公司 一种车辆测试的方法、服务器以及测试车辆
CN111309600A (zh) * 2020-01-21 2020-06-19 上汽通用汽车有限公司 虚拟场景注入自动驾驶测试方法及电子设备
CN112639793A (zh) * 2020-08-05 2021-04-09 华为技术有限公司 一种自动驾驶车辆的测试方法及装置
CN113077655A (zh) * 2021-03-18 2021-07-06 重庆车辆检测研究院有限公司 基于边缘计算的v2x场地在环测试方法和装置

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116449806A (zh) * 2023-06-14 2023-07-18 中汽智联技术有限公司 基于安全层信息的车辆信息融合控制功能测试方法和系统
CN116449806B (zh) * 2023-06-14 2023-09-01 中汽智联技术有限公司 基于安全层信息的车辆信息融合控制功能测试方法和系统
CN117094182A (zh) * 2023-10-19 2023-11-21 中汽研(天津)汽车工程研究院有限公司 V2v交通场景构建方法及v2x虚实融合测试系统
CN117094182B (zh) * 2023-10-19 2024-03-12 中汽研(天津)汽车工程研究院有限公司 V2v交通场景构建方法及v2x虚实融合测试系统

Also Published As

Publication number Publication date
CN113892088A (zh) 2022-01-04
US20240202401A1 (en) 2024-06-20
EP4379558A1 (en) 2024-06-05

Similar Documents

Publication Publication Date Title
US20210262808A1 (en) Obstacle avoidance method and apparatus
WO2022027304A1 (zh) 一种自动驾驶车辆的测试方法及装置
WO2021135371A1 (zh) 一种自动驾驶方法、相关设备及计算机可读存储介质
WO2023028858A1 (zh) 一种测试方法和系统
WO2021102955A1 (zh) 车辆的路径规划方法以及车辆的路径规划装置
CN114879631A (zh) 一种基于数字孪生云控平台的自动驾驶测试系统和方法
WO2022017307A1 (zh) 自动驾驶场景生成方法、装置及系统
CN115042821B (zh) 车辆控制方法、装置、车辆及存储介质
US20230048680A1 (en) Method and apparatus for passing through barrier gate crossbar by vehicle
US20240017719A1 (en) Mapping method and apparatus, vehicle, readable storage medium, and chip
CN115330923B (zh) 点云数据渲染方法、装置、车辆、可读存储介质及芯片
CN115202234B (zh) 仿真测试方法、装置、存储介质和车辆
US20230324863A1 (en) Method, computing device and storage medium for simulating operation of autonomous vehicle
CN115056784B (zh) 车辆控制方法、装置、车辆、存储介质及芯片
CN115145246B (zh) 控制器的测试方法、装置、车辆、存储介质及芯片
CN112829762A (zh) 一种车辆行驶速度生成方法以及相关设备
CN115042814A (zh) 交通灯状态识别方法、装置、车辆及存储介质
CN115334111A (zh) 用于车道识别的系统架构、传输方法,车辆,介质及芯片
CN115205848A (zh) 目标检测方法、装置、车辆、存储介质及芯片
CN115334109A (zh) 用于交通信号识别的系统架构、传输方法,车辆,介质及芯片
WO2023000206A1 (zh) 语音声源定位方法、装置及系统
WO2023102827A1 (zh) 一种路径约束方法及装置
US20230192104A1 (en) Hybrid scenario closed course testing for autonomous vehicles
US20230399008A1 (en) Multistatic radar point cloud formation using a sensor waveform encoding schema
CN115412586A (zh) 任务识别方法、装置、车辆、可读存储介质及芯片

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21955420

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2021955420

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2021955420

Country of ref document: EP

Effective date: 20240229

NENP Non-entry into the national phase

Ref country code: DE